Spatial user interfaces that help people navigate often focus on turn-by-turn instructions, ignoring how they may support incidental learning of spatial knowledge. Drawing on theories and findings from the area of spatial cognition, the current paper aims to understand how turn-by-turn instructions and relative location updates can support incidental learning of spatial (route and survey) knowledge. A user study was conducted in which people used map-based and video-based spatial interfaces to navigate to different locations in an indoor environment using turn-by-turn directions and relative location updates. Consistent with existing literature, we found that providing only turn-by-turn directions was in general not as effective as relative location updates for helping people acquire spatial knowledge; furthermore, map-based interfaces were in general better for incidental learning of survey knowledge, while video-based interfaces were better for route knowledge. Our results suggest that relative location updates encourage active processing of spatial information, which allows better incidental learning of spatial knowledge. We discuss the implications of our results for design trade-offs in navigation interfaces that facilitate learning of spatial knowledge.
{"title":"Getting There and Beyond: Incidental Learning of Spatial Knowledge with Turn-by-Turn Directions and Location Updates in Navigation Interfaces","authors":"S. Dey, Karrie Karahalios, W. Fu","doi":"10.1145/3267782.3267783","DOIUrl":"https://doi.org/10.1145/3267782.3267783","url":null,"abstract":"Spatial user interfaces that help people navigate often focus on turn-by-turn instructions, ignoring how they may support incidental learning of spatial knowledge. Drawing on theories and findings from the area of spatial cognition, the current paper aims to understand how turn-by-turn instructions and relative location updates can support incidental learning of spatial (route and survey) knowledge. A user study was conducted in which people used map-based and video-based spatial interfaces to navigate to different locations in an indoor environment using turn-by-turn directions and relative location updates. Consistent with existing literature, we found that providing only turn-by-turn directions was in general not as effective as relative location updates for helping people acquire spatial knowledge; furthermore, map-based interfaces were in general better for incidental learning of survey knowledge, while video-based interfaces were better for route knowledge. Our results suggest that relative location updates encourage active processing of spatial information, which allows better incidental learning of spatial knowledge. We discuss the implications of our results for design trade-offs in navigation interfaces that facilitate learning of spatial knowledge.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126666874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Philippe Wacker, Simon Voelker, Adrian Wagner, Jan O. Borchers
Besides sketching in mid-air, Augmented Reality (AR) lets users sketch 3D designs directly attached to existing physical objects. These objects provide natural haptic feedback whenever the pen touches them, and, unlike in VR, there is no need to digitize the physical object first. Especially in Personal Fabrication, this lets non-professional designers quickly create simple 3D models that fit existing physical objects, such as a lampshade for a lamp socket. We categorize guidance types of real objects into flat, concave, and convex surfaces, edges, and surface markings. We studied how accurately these guides let users draw 3D shapes attached to physical vs. virtual objects in AR. Results show that tracing physical objects is 48% more accurate, and can be performed in a similar time compared to virtual objects. Guides on physical objects further improve accuracy especially in the vertical direction. Our findings provide initial metrics when designing AR sketching systems.
{"title":"Physical Guides: An Analysis of 3D Sketching Performance on Physical Objects in Augmented Reality","authors":"Philippe Wacker, Simon Voelker, Adrian Wagner, Jan O. Borchers","doi":"10.1145/3267782.3267788","DOIUrl":"https://doi.org/10.1145/3267782.3267788","url":null,"abstract":"Besides sketching in mid-air, Augmented Reality (AR) lets users sketch 3D designs directly attached to existing physical objects. These objects provide natural haptic feedback whenever the pen touches them, and, unlike in VR, there is no need to digitize the physical object first. Especially in Personal Fabrication, this lets non-professional designers quickly create simple 3D models that fit existing physical objects, such as a lampshade for a lamp socket. We categorize guidance types of real objects into flat, concave, and convex surfaces, edges, and surface markings. We studied how accurately these guides let users draw 3D shapes attached to physical vs. virtual objects in AR. Results show that tracing physical objects is 48% more accurate, and can be performed in a similar time compared to virtual objects. Guides on physical objects further improve accuracy especially in the vertical direction. Our findings provide initial metrics when designing AR sketching systems.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125655295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a framework that uses deep learning on a server to recognize street signboards captured with mobile devices. The proposed framework enables a user to determine the types of shops at his/her location. Our experimental results revealed that the proposed framework recognized signboards with 86% accuracy within 1 second.
{"title":"Real-Time Recognition of Signboards with Mobile Device using Deep Learning for Information Identification Support System","authors":"Shigeo Kitamura, Kota Kita, Mitsunori Matsushita","doi":"10.1145/3267782.3274674","DOIUrl":"https://doi.org/10.1145/3267782.3274674","url":null,"abstract":"In this paper, we propose a framework that uses deep learning on a server to recognize street signboards captured with mobile devices. The proposed framework enables a user to determine the types of shops at his/her location. Our experimental results revealed that the proposed framework recognized signboards with 86% accuracy within 1 second.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125761113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Immersive technologies have been touted as empathetic mediums. This capability has yet to be fully explored through machine learning integration. Our demo explores proxemics in mixed-reality (MR) human-human interactions. The author developed a system where spatial features can be manipulated in real time by identifying emotions corresponding to unique combinations of facial micro-expressions and tonal analysis. The Magic Leap One, the first commercial spatial computing head-mounted (virtual retinal) display (HMD), is used as the interactive interface. A novel spatial user interface visualization element is prototyped that leverages the affordances of mixed reality by introducing both spatial and affective components to interfaces.
{"title":"Using Affective Computing for Proxemic Interactions in Mixed-Reality","authors":"Jasmine Roberts","doi":"10.1145/3267782.3274692","DOIUrl":"https://doi.org/10.1145/3267782.3274692","url":null,"abstract":"Immersive technologies have been touted as empathetic mediums. This capability has yet to be fully explored through machine learning integration. Our demo explores proxemics in mixed-reality (MR) human-human interactions. The author developed a system where spatial features can be manipulated in real time by identifying emotions corresponding to unique combinations of facial micro-expressions and tonal analysis. The Magic Leap One, the first commercial spatial computing head-mounted (virtual retinal) display (HMD), is used as the interactive interface. A novel spatial user interface visualization element is prototyped that leverages the affordances of mixed reality by introducing both spatial and affective components to interfaces.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132944026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Figure 1. (a) Flip-Flop Sticker attached to the trackpad; (b) The structure of Flip-Flop Sticker; (c) Applying shear force (sliding in any direction) to PL is assigned to X/Y-axis operations, and applying pressure force (pushing down) to PL/PR is assigned to Z-axis (+Z/-Z) operations; (d) The seesaw-like mechanism enabling “upward” force feedback to the primary operation finger; (e) Flip-Flop Sticker can also be utilized as a head-mounted display adapter; (f) Applications: 3D (X/Y/Z) operation in 3D CAD and 3D Tic-Tac-Toe can be realized by orthogonal and intuitive operation.
{"title":"Flip-Flop Sticker: Force-to-Motion Type 3DoF Input Device for Capacitive Touch Surface","authors":"Kaori Ikematsu, M. Fukumoto, I. Siio","doi":"10.1145/3267782.3274686","DOIUrl":"https://doi.org/10.1145/3267782.3274686","url":null,"abstract":"Figure 1. (a) Flip-Flop Sticker attached to the trackpad; (b) The structure of Flip-Flop Sticker; (c) Applying shear force (sliding in any direction) to PL is assigned to X/Y-axis operations, and applying pressure force (pushing down) to PL/PR is assigned to Z-axis (+Z/-Z) operations; (d) The seesaw-like mechanism enabling “upward” force feedback to the primary operation finger; (e) Flip-Flop Sticker can also be utilized as a head-mounted display adapter; (f) Applications: 3D (X/Y/Z) operation in 3D CAD and 3D Tic-Tac-Toe can be realized by orthogonal and intuitive operation.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122096126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Takayuki Kameoka, Yuki Kon, Takuto Nakamura, H. Kajimoto
With the spread of HMD-based VR experiences, many proposals have been made to improve the experience by providing tactile information to the fingertips, but these face problems such as difficulty in attaching and detaching the devices and hindrance of free finger movement. To address these issues, we developed Haptopus, which embeds a tactile display in the HMD and presents tactile sensations associated with the fingers to the face. In this paper, we conducted a preliminary investigation of the best suction pressure and compared Haptopus with conventional tactile presentation approaches. As a result, we confirmed that Haptopus improves the quality of the VR experience.
{"title":"Haptopus: Transferring the Touch Sense of the Hand to the Face Using Suction Mechanism Embedded in HMD","authors":"Takayuki Kameoka, Yuki Kon, Takuto Nakamura, H. Kajimoto","doi":"10.1145/3267782.3267789","DOIUrl":"https://doi.org/10.1145/3267782.3267789","url":null,"abstract":"With the spread of HMD-based VR experiences, many proposals have been made to improve the experience by providing tactile information to the fingertips, but these face problems such as difficulty in attaching and detaching the devices and hindrance of free finger movement. To address these issues, we developed Haptopus, which embeds a tactile display in the HMD and presents tactile sensations associated with the fingers to the face. In this paper, we conducted a preliminary investigation of the best suction pressure and compared Haptopus with conventional tactile presentation approaches. As a result, we confirmed that Haptopus improves the quality of the VR experience.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115876300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
B. Dewitz, Philipp Ladwig, Frank Steinicke, C. Geiger
In the context of spatial user interfaces for virtual or augmented reality, many interaction techniques and metaphors are referred to as being (super-)natural, magical or hyper-real. However, many of these terms have not been defined properly, such that a classification and distinction between those interfaces is often not possible. We propose a new classification system which can be used to identify those interaction techniques and relate them to reality-based and abstract interaction techniques.
{"title":"Classification of Beyond-Reality Interaction Techniques in Spatial Human-Computer Interaction","authors":"B. Dewitz, Philipp Ladwig, Frank Steinicke, C. Geiger","doi":"10.1145/3267782.3274680","DOIUrl":"https://doi.org/10.1145/3267782.3274680","url":null,"abstract":"In the context of spatial user interfaces for virtual or augmented reality, many interaction techniques and metaphors are referred to as being (super-)natural, magical or hyper-real. However, many of these terms have not been defined properly, such that a classification and distinction between those interfaces is often not possible. We propose a new classification system which can be used to identify those interaction techniques and relate them to reality-based and abstract interaction techniques.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129090690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dennis Schüsselbauer, Andreas Schmid, Raphael Wimmer, Laurin Muth
We demonstrate a simple technique that allows tangible objects to track their own position on a surface using an off-the-shelf optical mouse sensor. In addition to measuring the (relative) movement of the device, the sensor also allows capturing a low-resolution raw image of the surface. This makes it possible to detect the absolute position of the device via marker patterns at known positions. Knowing the absolute position may either be used to trigger actions or as a known reference point for tracking the device. This demo allows users to explore and evaluate affordances and applications of such tangibles.
{"title":"Spatially-Aware Tangibles Using Mouse Sensors","authors":"Dennis Schüsselbauer, Andreas Schmid, Raphael Wimmer, Laurin Muth","doi":"10.1145/3267782.3274690","DOIUrl":"https://doi.org/10.1145/3267782.3274690","url":null,"abstract":"We demonstrate a simple technique that allows tangible objects to track their own position on a surface using an off-the-shelf optical mouse sensor. In addition to measuring the (relative) movement of the device, the sensor also allows capturing a low-resolution raw image of the surface. This makes it possible to detect the absolute position of the device via marker patterns at known positions. Knowing the absolute position may either be used to trigger actions or as a known reference point for tracking the device. This demo allows users to explore and evaluate affordances and applications of such tangibles.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"26 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124485034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
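To make the absolute-positioning idea above concrete, here is a minimal Python sketch: a low-resolution raw frame from the mouse sensor is compared against known marker bitmaps using a normalized pixel-difference score, and the best match resets the tangible's absolute position. The marker bitmaps, coordinates, and frame values are invented for illustration; the paper does not specify the authors' actual detection method.

```python
# Hypothetical sketch of absolute positioning from a mouse sensor's raw image.
# Markers and values are made-up examples, not the authors' implementation.

def best_marker(frame, markers):
    """Return (marker_id, score) for the marker bitmap that best matches
    the raw sensor frame, scored by normalized per-pixel difference."""
    def score(a, b):
        diff = sum(abs(pa - pb) for pa, pb in zip(a, b))
        return 1.0 - diff / (255.0 * len(a))  # 1.0 = identical images
    return max(((mid, score(frame, bmp)) for mid, bmp in markers.items()),
               key=lambda t: t[1])

# Known marker bitmaps printed at fixed surface coordinates (hypothetical):
MARKERS = {"A": [0, 255, 0, 255], "B": [255, 255, 0, 0]}
MARKER_POS = {"A": (0.0, 0.0), "B": (0.30, 0.0)}   # metres on the surface

frame = [10, 250, 5, 240]          # simulated 2x2 raw sensor image
mid, s = best_marker(frame, MARKERS)
# The frame most resembles marker "A", so the tangible's relative tracking
# can be re-anchored to the absolute position MARKER_POS[mid].
```

In a real device the frame would come from the sensor's raw-image register and the match score would gate whether to trust the detection; here the threshold logic is omitted for brevity.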
Roger Boldu, Alexandru Dancu, Denys J. C. Matthies, Pablo Gallego Cascón, Shanaka Ransiri, Suranga Nanayakkara
Spatial user interfaces such as wearable fitness trackers are widely used to monitor and improve athletic performance. However, most fitness tracker interfaces require bimanual interaction, which significantly impacts the user's gait and pace. This paper evaluated a one-handed thumb-to-ring gesture interface for quickly accessing information without interfering with physical activity such as running. Through a pilot study, a minimal gesture set was selected, favoring gestures that could be executed reflexively to minimize distraction and cognitive load. The evaluation revealed that among the selected gestures, tap, swipe-down, and swipe-left were rated the most 'easy to use'. Interestingly, motion did not have a significant effect on ease of use or on execution time. However, interacting in motion was subjectively rated as more demanding. Finally, the gesture set was evaluated in real-world applications in which the user performed a running exercise while simultaneously controlling a lap timer, a distance counter, and a music player.
{"title":"Thumb-In-Motion: Evaluating Thumb-to-Ring Microgestures for Athletic Activity","authors":"Roger Boldu, Alexandru Dancu, Denys J. C. Matthies, Pablo Gallego Cascón, Shanaka Ransiri, Suranga Nanayakkara","doi":"10.1145/3267782.3267796","DOIUrl":"https://doi.org/10.1145/3267782.3267796","url":null,"abstract":"Spatial user interfaces such as wearable fitness trackers are widely used to monitor and improve athletic performance. However, most fitness tracker interfaces require bimanual interaction, which significantly impacts the user's gait and pace. This paper evaluated a one-handed thumb-to-ring gesture interface for quickly accessing information without interfering with physical activity such as running. Through a pilot study, a minimal gesture set was selected, favoring gestures that could be executed reflexively to minimize distraction and cognitive load. The evaluation revealed that among the selected gestures, tap, swipe-down, and swipe-left were rated the most 'easy to use'. Interestingly, motion did not have a significant effect on ease of use or on execution time. However, interacting in motion was subjectively rated as more demanding. Finally, the gesture set was evaluated in real-world applications in which the user performed a running exercise while simultaneously controlling a lap timer, a distance counter, and a music player.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133543316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object selection in a head-mounted display system has been studied extensively. Although most previous work indicates that users perform better when selecting with minimum offset added to the cursor, it is often not possible to directly select objects that are out of arm's reach. Thus, it is not clear whether offset-based techniques will result in improved overall performance. Moreover, due to the difference in muscle requirements of arm and shoulder between a hand-held device and a motion capture device, selection performance may be affected by factors related to ergonomics of the input device. In order to explore these uncertainties, we conduct a user study to evaluate the effects of four virtual cursor offset techniques on 3D object selection performance using Fitts' model and ISO 9241-9 standard while comparing two input devices in a head-mounted display. The results show that selection with No Offset is most efficient when the target is within reach. When the target is out of reach, Linear Offset outperforms Fixed-Length Offset and Go-Go Offset on movement time, error rate and effective throughput, as well as subjective preference evaluation. Overall, the Razer Hydra controller provides better and more stable selection performance than Leap Motion.
{"title":"Evaluation of Cursor Offset on 3D Selection in VR","authors":"Jialei Li, Isaac Cho, Z. Wartell","doi":"10.1145/3267782.3267797","DOIUrl":"https://doi.org/10.1145/3267782.3267797","url":null,"abstract":"Object selection in a head-mounted display system has been studied extensively. Although most previous work indicates that users perform better when selecting with minimum offset added to the cursor, it is often not possible to directly select objects that are out of arm's reach. Thus, it is not clear whether offset-based techniques will result in improved overall performance. Moreover, due to the difference in muscle requirements of arm and shoulder between a hand-held device and a motion capture device, selection performance may be affected by factors related to ergonomics of the input device. In order to explore these uncertainties, we conduct a user study to evaluate the effects of four virtual cursor offset techniques on 3D object selection performance using Fitts' model and ISO 9241-9 standard while comparing two input devices in a head-mounted display. The results show that selection with No Offset is most efficient when the target is within reach. When the target is out of reach, Linear Offset outperforms Fixed-Length Offset and Go-Go Offset on movement time, error rate and effective throughput, as well as subjective preference evaluation. Overall, the Razer Hydra controller provides better and more stable selection performance than Leap Motion.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129486541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
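For readers unfamiliar with the offset techniques compared in this study, the following is a minimal Python sketch of how such hand-to-cursor distance mappings are commonly formulated. The parameter values (fixed extension, linear gain, and the Go-Go threshold D and gain k) are illustrative assumptions, not the values used in the paper; the Go-Go form follows the well-known quadratic extension of Poupyrev et al.

```python
# Sketch of the four cursor-offset mappings compared in the study.
# All parameters below are illustrative assumptions, not the study's values.

def cursor_distance(hand_dist, technique,
                    fixed_len=1.5, linear_gain=3.0, D=0.45, k=10.0):
    """Map physical hand distance (metres) to virtual cursor distance (metres)."""
    if technique == "none":      # No Offset: cursor sits at the hand
        return hand_dist
    if technique == "fixed":     # Fixed-Length Offset: constant extension
        return hand_dist + fixed_len
    if technique == "linear":    # Linear Offset: uniform scaling of reach
        return hand_dist * linear_gain
    if technique == "gogo":      # Go-Go: 1:1 near the body, quadratic beyond D
        if hand_dist < D:
            return hand_dist
        return hand_dist + k * (hand_dist - D) ** 2
    raise ValueError(f"unknown technique: {technique}")
```

A selection technique would then place the cursor at `cursor_distance(...)` along the hand's ray from the body; No Offset maximizes precision within reach, while the other mappings trade precision for extended reach, matching the study's within-reach vs. out-of-reach findings.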