In this paper, we introduce a curved QWERTY keyboard, bent spherically in front of the user, to facilitate 3D word-gesture text entry in immersive virtual environments. With the curved keyboard, the number of candidate words in 3D word-gesture text entry is reduced compared with a planar keyboard. In a pilot study, the text entry performance of the first author was 21.0 WPM (SD = 5.06), with a total error rate of 26.0% (SD = 15.2).
Naoki Yanagihara, B. Shizuki, and Shin Takahashi. "Text Entry Method for Immersive Virtual Environments Using Curved Keyboard." In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, 2019. https://doi.org/10.1145/3359996.3365026
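The WPM and total error rate reported above are standard text-entry metrics. As a minimal sketch of how such figures are conventionally computed (the Soukoreff–MacKenzie definitions; the paper does not spell out its formulas, so this is an assumption about the usual convention):

```python
def wpm(transcribed: str, seconds: float) -> float:
    # Words per minute: (|T| - 1) characters, divided into 5-character
    # "words", scaled to one minute (standard text-entry convention).
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

def total_error_rate(incorrect_fixed: int, incorrect_not_fixed: int,
                     correct: int) -> float:
    # Total error rate (%): (INF + IF) / (C + INF + IF) * 100,
    # counting both corrected and uncorrected character errors.
    total = correct + incorrect_not_fixed + incorrect_fixed
    return 100.0 * (incorrect_not_fixed + incorrect_fixed) / total

# Example: a 30-character phrase transcribed in 14.5 s
print(round(wpm("the quick brown fox jumps over", 14.5), 1))  # → 24.0
```

The example inputs are illustrative, not data from the paper.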
In this demo we showcase Layered Horizons, a Virtual Reality (VR) experience we have developed for use in an ARC-funded research project, Waves of Words: Mapping and Modelling Australia’s Pacific Past. The platform allows users to connect different geospatial datasets (for our purposes, from the humanities and social sciences) into layers that can then be explored through natural gesture and body movement. This kind of interaction design takes full advantage of the medium’s affordances without relying on metaphors from other interactive media, while remaining familiar enough to engender intuitive and meaningful use. We demonstrate how the platform is currently being used to connect linguistic data (word lists) with archaeological data (e.g. on the spread of bananas through the Asia-Pacific region, or canoe styles found in different locations) and anthropological data (e.g. shared cultural features like chieftainship systems or kinship systems). Taking into account what we also know about Pacific navigation and simulated canoe travel, we can therefore build a complex layered map of the region over time that allows us to better discover probable human migration and contact patterns.
A. Burrell and Rachel Hendery. "Layered Horizons: a Geospatial Humanities Research Platform." In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, 2019. https://doi.org/10.1145/3359996.3365052
Sanghyeon Gil, Yunji Seok, Kiyeol Park, Jaeseok Yoo, S. Chin
In practice, studies that render soap bubbles by measuring the speed and thickness of the soap-film flow are hard to find. In this study, we measured changes in the flow and thickness of a soap film in order to render soap bubbles realistically.
Sanghyeon Gil, Yunji Seok, Kiyeol Park, Jaeseok Yoo, and S. Chin. "Soap film flow and thickness for soap bubble rendering." In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, 2019. https://doi.org/10.1145/3359996.3364716
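Film thickness matters for rendering because the color of a soap film is governed by thin-film interference. A minimal sketch of the standard two-beam interference model (not the authors' renderer; refractive index and the single-wavelength RGB primaries are illustrative assumptions):

```python
import math

def soap_film_reflectance(thickness_nm: float, wavelength_nm: float,
                          n_film: float = 1.33, cos_theta: float = 1.0) -> float:
    # Two-beam thin-film interference with a pi phase shift at the
    # air->film boundary: R(lambda) ~ sin^2(2*pi*n*d*cos(theta)/lambda).
    phase = 2.0 * math.pi * n_film * thickness_nm * cos_theta / wavelength_nm
    return math.sin(phase) ** 2

def film_rgb(thickness_nm: float) -> tuple:
    # Approximate R, G, B by single representative wavelengths in nm
    # (a common simplification; a real renderer would integrate spectra).
    return tuple(soap_film_reflectance(thickness_nm, wl)
                 for wl in (650.0, 510.0, 475.0))
```

Animating `thickness_nm` over time with the measured flow would reproduce the characteristic drifting color bands of a draining film.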
Feng Liang, Stevanus Kevin, K. Kunze, Yun Suen Pai
We propose PanoFlex, an adaptive method for projecting panoramic vision using a dynamic distortion method based on eye gaze. We stream real-time video from a 360° camera and project the view on a plane to the user. The user controls the distortion of this equirectangular projection using eye gaze. For our first user study, we compare our method with conventional equirectangular projection considering the impact on spatial perception. For our second study, we perform a simulator sickness evaluation when the user performs regular daily activities. We found that PanoFlex did not carry any significant negative impact towards the user’s spatial perception, perceived task load, and simulator sickness compared to the more conventional equirectangular view.
Feng Liang, Stevanus Kevin, K. Kunze, and Yun Suen Pai. "PanoFlex: Adaptive Panoramic Vision to Accommodate 360° Field-of-View for Humans." In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, 2019. https://doi.org/10.1145/3359996.3364767
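The equirectangular projection underlying PanoFlex maps longitude and latitude of a view direction linearly to image coordinates. A minimal sketch of that baseline mapping (the gaze-driven distortion described in the paper is not reproduced here; axis conventions are an assumption):

```python
import math

def dir_to_equirect(x: float, y: float, z: float,
                    width: int, height: int) -> tuple:
    # Map a unit view direction to equirectangular pixel coordinates,
    # assuming y is up and z is forward: longitude -> u, latitude -> v.
    lon = math.atan2(x, z)                    # [-pi, pi]
    lat = math.asin(max(-1.0, min(1.0, y)))   # [-pi/2, pi/2]
    u = (lon / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v
```

Looking straight ahead (0, 0, 1) lands at the image center; the severe stretching near the poles (`lat` → ±π/2) is what an adaptive distortion scheme can redistribute.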
We describe a prototype of UniVResity, a virtual reality system for remote classroom participation. UniVResity mirrors ongoing face-to-face classroom activities in virtual reality, accounting for potentially low-bandwidth data connections and a lack of VR equipment in class. Our system attempts to combine the benefits of online and face-to-face education, making face-to-face learning more accessible.
Krzysztof Pietroszek and Chao-Cheng Lin. "UniVResity: Face-to-Face Class Participation for Remote Students using Virtual Reality." In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, 2019. https://doi.org/10.1145/3359996.3364730
Controllers are currently the typical input device for commercial Virtual Reality (VR) systems. Yet, such controllers are not as efficient as other devices, including the mouse. This motivates us to investigate devices that substantially exceed the controller’s performance, for both VR and Augmented Reality (AR) systems. We performed a user study to compare several input devices, including a mouse, controller, and a 3D pen-like device on a VR and AR pointing task. Our results show that the 3D pen significantly outperforms modern VR controllers in all evaluated measures and that it is comparable to the mouse. Participants also liked the 3D pen more than the controller. Finally, we show how 3D pen devices could be integrated into today’s VR and AR systems.
Duc-Minh Pham and W. Stuerzlinger. "Is the Pen Mightier than the Controller? A Comparison of Input Devices for Selection in Virtual and Augmented Reality." In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, 2019. https://doi.org/10.1145/3359996.3364264
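Pointing-device comparisons like this one are commonly summarized with Fitts' law throughput. A sketch of the conventional Shannon formulation (illustrative; the paper's exact measures may differ):

```python
import math

def throughput_bps(distance: float, width: float,
                   movement_time_s: float) -> float:
    # Fitts' law throughput in bits/s: ID / MT, using the Shannon
    # index of difficulty ID = log2(D/W + 1).
    index_of_difficulty = math.log2(distance / width + 1.0)
    return index_of_difficulty / movement_time_s
```

For example, a target 7 units away and 1 unit wide reached in 1.5 s gives log2(8)/1.5 = 2.0 bits/s; a device that "significantly outperforms" another shows consistently higher throughput across difficulty levels.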
Animated virtual humans may rely on a full-body tracking system to reproduce user motions. In this paper, we reduce tracking to the upper body and reconstruct the lower body so that it autonomously follows its upper counterpart. Doing so reduces the number of sensors required, making virtual-human applications simpler and cheaper. It also enables deployment in cluttered scenes where the lower body is often hidden. Our contribution is an inversion of the well-known capture problem for bipedal walking: it determines footsteps rather than center-of-mass motions, yet can be solved with an off-the-shelf capture-problem solver. We assess the quality of our method in real-time tracking experiments on a wide variety of movements.
Vincent Thomasset, Stéphane Caron, and V. Weistroffer. "Lower body control of a semi-autonomous avatar in Virtual Reality: Balance and Locomotion of a 3D Bipedal Model." In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, 2019. https://doi.org/10.1145/3359996.3364240
Juan Heredia, Jonathan Tirado, Vladislav Panov, Miguel Altamirano Cabrera, K. Youcef-Toumi, D. Tsetserukou
Haptic devices have been employed to immerse users in VR environments; in particular, hand and finger haptic devices have been developed extensively. However, such devices either occlude hand detection for some tracking systems or, with other tracking systems, require users to wear two different devices (haptic and tracking) on both hands, which is uncomfortable. We introduce RecyGlide, a novel wearable multimodal display worn on the forearm. RecyGlide is composed of inverted five-bar linkages with 2 degrees of freedom (DoF) and vibration motors (see Fig. 1(a)). The device provides multimodal tactile feedback such as slippage, force vector, pressure, and vibration. We tested the discrimination of monomodal and multimodal stimulus patterns on the forearm and confirmed that the multimodal patterns have a higher recognition rate. We used the device in VR applications and showed that it enhances the VR experience and makes it more interactive.
Juan Heredia, Jonathan Tirado, Vladislav Panov, Miguel Altamirano Cabrera, K. Youcef-Toumi, and D. Tsetserukou. "RecyGlide: A Forearm-worn Multi-modal Haptic Display aimed to Improve User VR Immersion Submission." In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, 2019. https://doi.org/10.1145/3359996.3364759
While many metaphors have been developed for interaction at a specific point on the reality-virtuality continuum, much less attention has been paid to designing metaphors that allow users to cross the boundaries between the virtual, the augmented, and the real. We propose an Inter-Reality Interactive Surface (IRIS) that enables users to collaborate across the reality-virtuality continuum within the same application. While we examine IRIS in the context of an immersive educational platform, UniVResity, the metaphor can be generalized to many other application domains.
Krzysztof Pietroszek. "IRIS: Inter-Reality Interactive Surface." In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, 2019. https://doi.org/10.1145/3359996.3364731
Yuanjie Wu, Yu Wang, Sungchul Jung, S. Hoermann, R. Lindeman
To interact as fully-tracked avatars with rich hand gestures in Virtual Reality (VR), we often need to wear a tracking suit or attach extra sensors on our bodies. User experience and performance may be impacted by the cumbersome devices and low fidelity behavior representations, especially in social scenarios where good communication is required. In this paper, we use multiple depth sensors and focus on increasing the behavioral fidelity of a participant’s virtual body representation. To investigate the impact of the depth-sensor-based avatar system (full-body tracking with hand gestures), we compared it against a controller-based avatar system (partial-body tracking with limited hand gestures). We designed a VR interview simulation for a single user to measure the effects on presence, virtual body ownership, workload, usability, and perceived self-performance. Specifically, the interview process was recorded in VR, together with all the verbal and non-verbal cues. Subjects then took a third-person view to evaluate their previous performance. Our results show that the depth-sensor-based avatar control system increased virtual body ownership and also improved the user experience. In addition, users rated their non-verbal behavior performance higher in the full-body depth-sensor-based avatar system.
Yuanjie Wu, Yu Wang, Sungchul Jung, S. Hoermann, and R. Lindeman. "Exploring the Use of a Robust Depth-sensor-based Avatar Control System and its Effects on Communication Behaviors." In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, 2019. https://doi.org/10.1145/3359996.3364267