Minuet: Multimodal Interaction with an Internet of Things
Runchang Kang, Anhong Guo, Gierad Laput, Y. Li, Xiang 'Anthony' Chen
DOI: 10.1145/3357251.3357581
A large number of Internet-of-Things (IoT) devices will soon populate our physical environments. Yet, IoT devices’ reliance on mobile applications and voice-only assistants as their primary interface limits their scalability and expressiveness. Building on the classic ‘Put-That-There’ system, we contribute an exploration of the design space of voice + gesture interaction with spatially distributed IoT devices. Our design space decomposes users’ IoT commands into two components: selection and interaction. We articulate how the permutations of voice and freehand gesture for these two components can complementarily afford interaction possibilities that go beyond current approaches. We instantiate this design space as a proof-of-concept sensing platform and demonstrate a series of novel IoT interaction scenarios, such as making ‘dumb’ objects smart, commanding robotic appliances, and resolving ambiguous pointing at cluttered devices.
Pursuit Sensing: Extending Hand Tracking Space in Mobile VR Applications
Pascal Chiu, Kazuki Takashima, Kazuyuki Fujita, Y. Kitamura
DOI: 10.1145/3357251.3357578
Field-of-view limitations are one of the major persistent setbacks for camera-based motion tracking systems, and the need for flexible ways to improve capture volumes remains. We present Pursuit Sensing, a technique that considerably extends the tracking volume of a camera sensor through self-actuated reorientation using a customized gimbal, enabling a Leap Motion to dynamically follow the user’s hand position in mobile HMD scenarios. The technique provides accessibility and high hardware compatibility for both users and developers while remaining simple and inexpensive to implement. Our technical evaluation shows that the proposed solution increases hand tracking volume by 142% in pitch and 44% in yaw compared to the camera’s base FOV, while featuring low latency and robustness against fast hand movements.
Investigating the Effect of Distractor Interactivity for Redirected Walking in Virtual Reality
Robbe Cools, A. Simeone
DOI: 10.1145/3357251.3357580
Due to the mismatch in size between a Virtual Environment and the physical space available, the use of alternative locomotion techniques becomes necessary. In small spaces, Redirected Walking methods provide limited benefits, and approaches such as distractors can provide an alternative. Distractors are virtual elements or characters that attempt to catch the user’s attention while the system subtly steers them away from physical boundaries. In this research, we focused explicitly on understanding how different levels of interactivity affect user performance and behaviour. We developed three types of continuous redirecting distractors with varying levels of interaction possibilities, called Looking, Touching, and Interacting. In a user study, we compared them to a discrete reorientation technique, called Stop and Reset, in a task requiring users to traverse a 30 m path. While discrete reorientation is faster, continuous redirection through distractors was significantly less noticeable. Results suggest that more complex interaction is preferred and better able to hold user attention for longer.
{"title":"Symposium on Spatial User Interaction","authors":"","doi":"10.1145/3357251","DOIUrl":"https://doi.org/10.1145/3357251","url":null,"abstract":"","PeriodicalId":370782,"journal":{"name":"Symposium on Spatial User Interaction","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121536405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Adaptive Interface for Spatial Augmented Reality Workspaces
W. Lages, D. Bowman
DOI: 10.1145/3357251.3360005
A promising feature of wearable augmented reality devices is the ability to easily access information on the go. However, designing AR interfaces that can support user movement and also adjust to different physical environments is a challenging task. We present an interaction system for AR windows that uses adaptation to automatically perform low-level window movement while allowing high-level user control.
Blended Agents: Manipulation of Physical Objects within Mixed Reality Environments and Beyond
S. Schmidt, Oscar Ariza, Frank Steinicke
DOI: 10.1145/3357251.3357591
Mixed reality (MR) environments allow real users and virtual agents to coexist within the same virtually augmented physical space. While tracking of body parts such as the user’s head and hands allows virtual objects to react plausibly to the real user’s actions, virtual agents have only a very limited influence on their physical environment. In this paper, we introduce the concept of blended agents, which are capable of manipulating physical properties such as an object’s location and surface material. We present two prototypical implementations of virtual-physical interactions using robotic actuators and thermochromic ink. As the two interactions show considerably different characteristics, e.g., with regard to persistence, explicability, and observability, we performed a user study to investigate their effects on subjective measures such as the agent’s perceived social and spatial presence. In a golf scenario, participants interacted with a blended agent capable of virtual-physical manipulations such as hitting a golf ball and writing on physical paper. A statistical analysis of the quantitative data did not yield significant differences between blended agents and virtual agents without physical capabilities. However, participants’ qualitative feedback indicates that persistent manipulations improve both the perceived realism of the agent and the overall user experience.
Gaze Direction Visualization Techniques for Collaborative Wide-Area Model-Free Augmented Reality
Yuan Li, Feiyu Lu, W. Lages, D. Bowman
DOI: 10.1145/3357251.3357583
In collaborative tasks, it is often important for users to understand their collaborator’s gaze direction or gaze target. On an augmented reality (AR) display, a ray representing the collaborator’s gaze can convey such information. In wide-area AR, however, a simplistic virtual ray may be ambiguous at large distances because occlusion cues are missing when a model of the environment is unavailable. We describe two novel visualization techniques designed to improve gaze ray effectiveness by facilitating visual matching between rays and targets (Double Ray technique) and by providing spatial cues to help users understand ray orientation (Parallel Bars technique). In a controlled experiment performed in a simulated AR environment, we evaluated these gaze ray techniques on target identification tasks with varying levels of difficulty. The experiment found that, assuming reliable tracking and an accurate collaborator, the Double Ray technique is highly effective at reducing visual ambiguity, but users found it difficult to use the spatial information provided by the Parallel Bars technique. We discuss the implications of these findings for the design of collaborative mobile AR systems for use in large outdoor areas.
One-Handed Interaction Technique for Single-Touch Gesture Input on Large Smartphones
Kyohei Hakka, Toshiya Isomoto, B. Shizuki
DOI: 10.1145/3357251.3358750
We propose a one-handed interaction technique that uses a pressure-based cursor to enable users to perform single-touch gestures such as tap, swipe, drag, and double-tap on unreachable targets. In the proposed technique, cursor mode is started by swiping from the bezel. Touch-down and touch-up events occur at the cursor position when users increase and decrease touch pressure, respectively. Because touch-down and touch-up are triggered simply by adjusting the thumb’s touch pressure from low to high or vice versa, the user can perform single-touch gestures at the cursor position with the thumb. To investigate the performance of the proposed technique, we conducted a pilot study; the results show that the proposed technique is promising as a one-handed interaction technique.
V-Rod: Floor Interaction in VR
Andrew Rukangu, A. Tuttle, Anton Franzluebbers, K. Johnsen
DOI: 10.1145/3357251.3358756
We present a novel cane-based device for interacting with floors in Virtual Reality (VR). We demonstrate its versatility and flexibility in several use cases such as gaming and menu interaction. Initial feedback from users points towards better control in spatial tasks and increased comfort in tasks that require the user’s arms to be raised or extended for long periods. A networked example allows us to explore the asymmetrical aspect of VR interaction using the V-Rod. We demonstrate that the hardware and circuitry deliver acceptable performance even for demanding applications. In addition, we propose that a grounded, passive haptic device gives the user a better sense of balance, thereby decreasing the risk of VR sickness. VR Balance is a game intended to quantify the differences in comfort, intuitiveness, and accuracy when using or not using a grounded passive haptic device.
SIGMA: Spatial Interaction Gaming for Movie- and Arena-goers
Krzysztof Pietroszek
DOI: 10.1145/3357251.3358758
We present SIGMA, a mass-interaction system for playing games in movie theatres and arenas. SIGMA uses players’ smartphones as spatial game controllers. Games for SIGMA use novel techniques for aggregating mass interactions, which we introduce using an interactive “Little Red Riding Hood” storybook as a case study.