We developed a cubic keyboard to exploit the three-dimensional (3D) space of virtual reality (VR) environments. The user enters a word by drawing a stroke with the controller. The keyboard consists of 27 keys arranged in a 3 x 3 x 3 (vertical, horizontal, and depth) array; the 26 letters of the alphabet are assigned to 26 of the keys, and the center key is left blank. The user moves the controller to the key for each letter of the word and selects that key by slowing the movement.
{"title":"Cubic Keyboard for Virtual Reality","authors":"Naoki Yanagihara, B. Shizuki","doi":"10.1145/3267782.3274687","DOIUrl":"https://doi.org/10.1145/3267782.3274687","url":null,"abstract":"We developed a cubic keyboard to exploit the three-dimensional (3D) space of virtual reality (VR) environments. The user enters a word by drawing a stroke with the controller. The keyboard consists of 27 keys arranged in a 3 x 3 x 3 (vertical, horizontal, and depth) 3D array; all 26 letters of the alphabet are assigned to 26 keys; the center key is blank. The user moves the controller to the key of a letter of the word and then selects that key by slowing movement.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127071926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Roth, David Mal, Christian Felix Purps, Peter Kullmann, Marc Erich Latoschik
Humans communicate to a large degree through nonverbal behavior. Nonverbal mimicry, i.e., the imitation of another's behavior, can positively affect social interactions. In virtual environments, user behavior can be replicated on avatars, and agent behaviors can be constructed artificially. By combining both, hybrid avatar-agent technologies aim to actively mediate virtual communication to foster interpersonal understanding and rapport. We present a naïve prototype, the "Mimicry Injector", that injects artificial mimicry into real-time virtual interactions. In an evaluation study, two participants were embodied in a virtual reality (VR) simulation and performed a negotiation task. Their virtual characters either a) replicated only the original behavior or b) displayed the original behavior plus induced mimicry. We found that most participants did not detect the modification. However, the modification did not have a significant impact on the perception of the communication.
{"title":"Injecting Nonverbal Mimicry with Hybrid Avatar-Agent Technologies: A Naïve Approach","authors":"D. Roth, David Mal, Christian Felix Purps, Peter Kullmann, Marc Erich Latoschik","doi":"10.1145/3267782.3267791","DOIUrl":"https://doi.org/10.1145/3267782.3267791","url":null,"abstract":"Humans communicate to a large degree through nonverbal behavior. Nonverbal mimicry, i.e., the imitation of another's behavior can positively affect the social interactions. In virtual environments, user behavior can be replicated to avatars, and agent behaviors can be artificially constructed. By combining both, hybrid avatar-agent technologies aim at actively mediating virtual communication to foster interpersonal understanding and rapport. We present a naïve prototype, the \"Mimicry Injector\", that injects artificial mimicry in real-time virtual interactions. In an evaluation study, two participants were embodied in a Virtual Reality (VR) simulation, and had to perform a negotiation task. Their virtual characters either a) replicated only the original behavior or b) displayed the original behavior plus induced mimicry. We found that most participants did not detect the modification. However, the modification did not have a significant impact on the perception of the communication.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133873057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extended reality (XR) technology challenges practitioners to update methods of representation in art, a topic our laboratory has also been working on [2]. In this demonstration, we present Air Maestros (AM), a multi-user audiovisual experience in mixed reality (MR) space using Microsoft HoloLens. The purpose of AM is to expand an ordinary music-sequencer method into three-dimensional (3D) space and a multi-user system. Users place 3D note objects in the MR space and, with a certain gesture, shoot a glowing ball at them. When a shot hits a 3D note object, an audiovisual effect appears at the object's spatial position.
{"title":"Air Maestros: A Multi-User Audiovisual Experience Using MR","authors":"Ryu Nakagawa, Ryo Komatsubara, Taku Ota, Hidefumi Ohmura","doi":"10.1145/3267782.3274685","DOIUrl":"https://doi.org/10.1145/3267782.3274685","url":null,"abstract":"Extended reality (XR) technology challenges practitioners to update the method of representation in art, which our laboratory has been working on as well [2]. Thus, in this demonstration, we present Air Maestros (AM), a multi-user audiovisual experience in mixed reality (MR) space using Microsoft HoloLens. The purpose of AM is to expand an ordinary music sequencer method into a three-dimensional (3D) space and a multi-user system. In this case, the users place 3D note objects into the MR space and, with a certain gesture, shoot a glowing ball at them. When their shots hit the 3D note objects, audiovisual effects appear at the objects' spatial positions.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114629236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jamil Joundi, Peter D. Conradie, Jelle Saldien, L. Marez
This paper explores the possibilities of virtual reality (VR) as a prototyping tool for iterative design and development in the fashion industry. The VR prototyping setup was subsequently evaluated using two qualitative test protocols. Our results highlight how professional fashion designers view VR and what their expectations are.
{"title":"Exploring the Potential and Challenges of VR Prototyping in Fashion Design","authors":"Jamil Joundi, Peter D. Conradie, Jelle Saldien, L. Marez","doi":"10.1145/3267782.3274768","DOIUrl":"https://doi.org/10.1145/3267782.3274768","url":null,"abstract":"This paper explores the possibilities of Virtual Reality (VR) as a tool for prototyping iterative design and development in the fashion industry. Subsequently, the system was evaluated by using two qualitative test protocols. Our results highlight how professional (fashion)designers view VR and what their expectations are.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114186708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This work investigated how a tracked, real golf club, used for high-fidelity passive haptic feedback in virtual reality, affected performance in a golf putting task relative to tracked controllers. The primary hypothesis was that overall accuracy would improve through the inertial advantages of swinging a real club as well as the additional alignment and comfort advantages of placing the putter on the floor. We also expected higher user preference for the technique and a correlation with putting performance in the real environment. To evaluate these prospective advantages, we conducted a user study with a cross-over design and 20 participants from the local population. Results confirmed the performance advantages as well as a preference for the tracked golf club over the controller, but we were not able to confirm a correlation with real-world putting. Future work will investigate means to strengthen this aspect while evaluating new research opportunities presented by the study's findings.
{"title":"Performance Benefits of High-Fidelity Passive Haptic Feedback in Virtual Reality Training","authors":"Anton Franzluebbers, K. Johnsen","doi":"10.1145/3267782.3267790","DOIUrl":"https://doi.org/10.1145/3267782.3267790","url":null,"abstract":"This work investigated how a tracked, real golf club, used for high-fidelity passive haptic feedback in virtual reality, affected performance relative to using tracked controllers for a golf putting task. The primary hypothesis evaluated in this work was that overall accuracy would be improved through various inertial advantages in swinging a real club as well as additional alignment and comfort advantages from placing the putter on the floor. We also expected higher user preference for the technique and correlation with putting performance in the real environment. To evaluate these prospective advantages, a user study with a cross-over design was conducted with 20 participants from the local population. Results confirmed performance advantages as well as preference for the tracked golf club over the controller, but we were not able to confirm a correlation with real-world putting. Future work will investigate means to strengthen this aspect, while evaluating new research opportunities presented by study findings.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"288 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121275351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Our previous studies [1] have shown that electrical stimulation delivered from the skin surface to the tendon region (tendon electrical stimulation: TES) can elicit a force sensation. TES is thought to produce a proprioceptive force sensation by stimulating the receptors or sensory nerves inside the tendon that are responsible for sensing the magnitude of muscle contraction. While TES can induce such a force sensation, it has several limitations, such as difficulty in presenting a momentary strong sensation like a collision. To address these issues and produce a more realistic and natural sensation, we constructed a multimodal presentation system that gives users multiple cues about a situation in which a force is applied to them. In this study, we used TES to present proprioception, vibration to present a cutaneous sensation, and a head-mounted display (HMD) to present the corresponding visuals (Figure 1). We adopted a damped sine wave vibration, which is often used to express collisions [2]. Using this system, we investigated whether TES could contribute to the reproduction of haptics in a way similar to the other modalities, even in a momentary collision situation. We also evaluated the efficacy of TES itself and of the multimodal system involving TES.
{"title":"Haptic Interface Using Tendon Electrical Stimulation: Evaluation of the Effectiveness on Multimodal Presentation","authors":"Akifumi Takahashi, K. Tanabe, H. Kajimoto","doi":"10.1145/3267782.3274684","DOIUrl":"https://doi.org/10.1145/3267782.3274684","url":null,"abstract":"Our previous studies [1] have shown that electrical stimulation from the skin surface to the tendon region (Tendon Electrical Stimulation: TES) can elicit a force sensation. TES is thought to present a proprioceptive force sensation by stimulating receptors or sensory nerves responsible for recognizing the magnitude of the muscle contraction existing inside the tendon. While TES can induce such a force sensation, it has several limitations such as difficulty in presenting a momentary strong sensation, such as a collision. In order to address these issues and produce a more realistic and natural sensation, we constructed a multimodal presentation system that gives the users multiple cues about a situation where a force is applied to the user. In this study, we used TES as the part to present proprioception, vibration to present a cutaneous sensation, and a visual head mounted display (HMD) system to present simultaneous images (Figure 1). We adopted a damped sine wave vibration often used for the expression of collision [2]. Using this system, we investigated whether TES could contribute to the reproduction of haptics in a way that was similar to other modalities even if it is momentary collision situation. We also evaluated the efficacy of TES itself and that of the multimodal system involving TES.","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122079100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","authors":"","doi":"10.1145/3267782","DOIUrl":"https://doi.org/10.1145/3267782","url":null,"abstract":"","PeriodicalId":126671,"journal":{"name":"Proceedings of the 2018 ACM Symposium on Spatial User Interaction","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124174028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}