Hyperspectral imaging, as a fast and cost-effective method of mapping the composition of geological materials in context, is a key enabler for scientific discoveries in the geosciences. Being able to do this in situ, in a real-world context and possibly in real time, would have profound implications for geology and minerals exploration. This work addresses this issue by developing an augmented reality application, HypAR, that enables in-situ, interactive exploration of mineralogy spatially co-located with and embedded in rock surfaces. User-centred design is employed to ensure the utility and validity of the system. We describe the requirements analysis and design process for HypAR, and present a prototype using the Microsoft HoloLens that was implemented for a rock wall containing a wide range of minerals and materials from significant geological localities of Western Australia. We briefly discuss several use cases for which HypAR, and extensions thereof, may prove useful to geoscientists and other end users who must make effective, informed decisions about the mineralogy of rock surfaces.
U. Engelke, Casey Rogers, J. Klump, and I. Lau. "HypAR: Situated Mineralogy Exploration in Augmented Reality." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19), 2019. https://doi.org/10.1145/3359997.3365715
The combination of room-scale virtual reality and non-isometric virtual walking techniques is promising: the former provides a comfortable and natural VR experience, while the latter relaxes the constraint of the physical space surrounding the user. In the last few decades, many non-isometric virtual walking techniques have been proposed to enable unconstrained walking without disrupting the sense of presence in the VR environment. Nevertheless, many studies have reported the occurrence of VR sickness near the detection threshold or after prolonged use, and there is a knowledge gap concerning VR sickness levels and gait performance for amplified non-isometric virtual walking well beyond the detection threshold. This paper presents an experiment with 17 participants that investigated VR sickness and gait parameters during non-isometric virtual walking at large, clearly detectable translational gain levels. The results showed that the translational gain level had a significant effect on the reported sickness score, gait parameters, and center-of-mass displacements. Surprisingly, participants who did not experience motion sickness symptoms by the end of the experiment adapted well to the non-isometric virtual walking and even showed improved performance at a large gain level of 10x.
C. A. T. Cortes, Hsiang-Ting Chen, and Chin-Teng Lin. "Analysis of VR Sickness and Gait Parameters During Non-Isometric Virtual Walking with Large Translational Gain." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19), 2019. https://doi.org/10.1145/3359997.3365694
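The translational gain studied above is, at its core, a scaling of the user's physical head displacement before it is applied to the virtual viewpoint. A minimal sketch of that mapping (the helper function and its signature are illustrative, not code from the paper):

```python
import numpy as np

def apply_translational_gain(prev_physical, curr_physical, prev_virtual, gain):
    """Map a physical head displacement to a virtual viewpoint update.

    Only the horizontal (x, z) components are amplified; vertical head
    bob (y) is passed through unscaled, which is the usual choice in
    redirected-walking systems.
    """
    delta = np.asarray(curr_physical, dtype=float) - np.asarray(prev_physical, dtype=float)
    scale = np.array([gain, 1.0, gain])  # amplify x/z only
    return np.asarray(prev_virtual, dtype=float) + delta * scale

# One physical step of 0.1 m forward under a 10x gain moves the
# virtual viewpoint 1.0 m forward.
step = apply_translational_gain([0, 1.7, 0], [0, 1.7, 0.1], [5, 1.7, 5], gain=10.0)
```

A gain of 1.0 reduces this to ordinary isometric walking, which is why the paper's large, detectable gains (up to 10x) are the interesting regime.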
We propose an approach for real-time insertion of virtual objects into pre-recorded moving-camera 360° video. First, we reconstruct camera motion and sparse scene content via structure from motion on the stitched equirectangular video. Then, to plausibly reproduce real-world lighting conditions for virtual objects, we use inverse tone mapping to recover high-dynamic-range environment maps that vary spatially along the camera path. We implement our approach in the Unity rendering engine for real-time virtual object insertion via differential rendering, with dynamic lighting, image-based shadowing, and user interaction. This expands the use and flexibility of 360° video for interactive computer graphics and visual effects applications.
J. Tarko, J. Tompkin, and Christian Richardt. "Real-time Virtual Object Insertion for Moving 360° Videos." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19), 2019. https://doi.org/10.1145/3359997.3365708
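Differential rendering, the compositing strategy named above, pastes the rendered object into the captured frame and adds only the lighting *difference* (shadows, bounced light) that the object introduces onto the surrounding scene. A schematic per-pixel sketch of the standard formulation (variable names are illustrative):

```python
import numpy as np

def differential_composite(background, with_obj, without_obj, obj_mask):
    """Composite a rendered virtual object into a real video frame.

    background  -- captured frame (H x W), here single-channel for brevity
    with_obj    -- render of the local scene proxy plus the virtual object
    without_obj -- render of the local scene proxy alone
    obj_mask    -- 1 where the virtual object covers the pixel, else 0
    Object pixels come from the render; elsewhere the render difference
    (e.g. a cast shadow darkening the proxy) is added to the real frame.
    """
    diff = with_obj - without_obj
    return obj_mask * with_obj + (1.0 - obj_mask) * (background + diff)

# Tiny 1x2 example: pixel 0 is covered by the object; pixel 1 only
# receives its shadow (the render is darker with the object present).
bg      = np.array([[0.5, 0.5]])
with_o  = np.array([[0.9, 0.2]])
without = np.array([[0.4, 0.4]])
mask    = np.array([[1.0, 0.0]])
out = differential_composite(bg, with_o, without, mask)
```

The shadowed pixel ends up at 0.5 + (0.2 - 0.4) = 0.3: the real frame darkened by exactly the amount the renders disagree.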
The Relive History VR project provides users with a highly detailed 3D scan of the World Heritage site of the Ayutthaya historical park in Thailand. It brings together large-scale 3D scanning technology, VR, and an educational, virtual-tourism-based experience to help users experience heritage sites in a new and innovative way. The main contribution of this project is the ability of a user to go back in time to see and experience a particular site and the culture associated with it. The reconstruction of the original structures was modelled based on studies by historical researchers. The reconstructed model can be shown transparently overlaid on top of the scanned model of the site's current state to better compare the architectural detail. Users can interact with the ancient people to complete given missions and learn about the old cultures along the way. We expect that our audiences will enjoy the experience, learn the value of the World Heritage site, and help to preserve it.
Pongsagon Vichitvejpaisal, Natchaya Porwongsawang, and P. Ingpochai. "Relive History: VR time travel at the world heritage site." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19), 2019. https://doi.org/10.1145/3359997.3365733
In virtual reality (VR), a new language of sound design is emerging. As directors grapple with some of the inherent problems of telling a story in VR (for instance, the audience's ability to control the field of view), sound designers are playing a new role in subconsciously guiding the audience's attention and, consequently, in framing the narrative. However, developing a new language of sound design requires time for creative experimentation, and, in direct opposition to this, a typical VR workflow often features compressed project timelines, software difficulties, and budgetary constraints. VR sound research offers sound designers little guidance here: decades of research have focused on high fidelity and realistic sound representation in the name of presence and uninterrupted immersion [McRoberts, 2018], largely ignoring the potential contribution of cinematic sound design practices that use creative sound to guide an audience's emotion. Angela McArthur, Rebecca Stewart, and Mark Sandler go as far as to argue that unrealistic and creative sound design may be crucial for an audience's emotional engagement in virtual reality [McArthur et al., 2017]. To contribute towards this new language of sound for VR, and with reference to the literature, this practice-led research explores cinematic sound practices and principles in 360-film through the production of a 5-minute 360-film entitled Afraid of the Dark. The research is supported by a contextual survey including unpublished interviews with the sound designers of three 360-films that had the budget and time to experiment with cinematic sound practices: "Under the Canopy", with sound design by Joel Douek; "My Africa", with sound design by Roland Heap; and the Emmy-award-winning "Collisions", with sound design by the Oscar-nominated Tom Myers of Skywalker Sound. Additional insights come from an unpublished interview with an experienced team of 360-film sound designers from Cutting Edge in Brisbane, Australia: Mike Lange, Michael Thomas, and Heath Plumb. The findings detail the benefits of thinking about sound from the beginning of pre-production, the practical considerations of on-set sound recording, and differing approaches to realistic representation and creative design for documentary in the sound studio. Additionally, the research contributes a low-budget workflow for creating spatial sound for 360-film as well as a template for an ambisonic location sound report.
Alicia Eames. "Beyond Reality." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19), 2019. https://doi.org/10.1145/3359997.3365736
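Spatial sound for 360-film, as in the workflow above, is commonly delivered as first-order ambisonics (B-format). The standard encoding gains for panning a mono source are simple trigonometry; a minimal sketch using the ACN channel order with SN3D normalisation (the function name is illustrative; this is textbook ambisonics, not the paper's workflow):

```python
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order ambisonics (ACN/SN3D).

    Azimuth is measured counter-clockwise from straight ahead,
    elevation upward from the horizontal plane. Returns the four
    channels in ACN order: W (omni), Y, Z, X (dipoles).
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono                               # omnidirectional component
    y = mono * np.sin(az) * np.cos(el)     # left-right dipole
    z = mono * np.sin(el)                  # up-down dipole
    x = mono * np.cos(az) * np.cos(el)     # front-back dipole
    return np.stack([w, y, z, x])

# A source straight ahead excites only W and X.
front = encode_foa(np.ones(3), azimuth_deg=0.0, elevation_deg=0.0)
```

During playback the B-format stream is rotated against the viewer's head orientation and then decoded binaurally, which is what keeps sounds anchored to the 360° image as the audience looks around.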
This project aimed to apply multi-sensory visual analytics to a Computational Fluid Dynamics (CFD) dataset using augmented and mixed reality interfaces. An initial application was developed for the Microsoft HoloLens that allows users to interact with the CFD data using gestures, enabling control over the position, rotation, scale, and sampling of the data, together with voice commands that provide a range of functionalities such as changing a parameter or rendering a different view. The project leads to a more engaging and immersive experience of data analysis, generalised across CFD simulations. The application can also explore CFD datasets in fully collaborative ways, allowing engineers, scientists, and end users to understand the underlying physics and behaviour of fluid flows together.
T. Bednarz, Michael Tobia, Huyen Nguyen, and D. Branchaud. "Immersive Analytics using Augmented Reality for Computational Fluid Dynamics Simulations." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19), 2019. https://doi.org/10.1145/3359997.3365735
Nearly ubiquitous smartphone use invites research and development of augmented reality (AR) experiences that promote knowledge and understanding. However, there is a lack of design science research dissemination about developing such solutions. This paper adds to the information systems body of knowledge by presenting the second iteration of a Design Science Research Methodology artefact and the process of its development, in the form of a gamified place-experience application about indigenous art, focusing on the optimization of AR integration and user interface enhancements. In testing the usability, we illustrate how the application was optimized for successful outcomes. The qualitative analysis revealed a high level of usability of the mobile application, leading to further testing of its efficacy in creating a Sense of Place where the art is curated and displayed.
Nikolche Vasilevski and James R. Birt. "Optimizing Augmented Reality Outcomes in a Gamified Place Experience Application through Design Science Research." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19), 2019. https://doi.org/10.1145/3359997.3365747
Research has shown that Mixed Presence (MP) systems are a valuable collaboration tool. However, current research into MP systems is limited to a handful of tabletop and Virtual Reality (VR) systems, with no exploration of Head-Mounted Display (HMD) based Mixed Reality (MR) solutions. In this paper, we present a prototype HMD-based MR Mixed Presence system that we designed and developed. We conducted a user study to investigate how different role assignments affect coordination and engagement in a group task with two local users using MR HMDs and one remote user with a desktop-based Augmented Reality (AR) interface. The results indicated that the role of coordinator significantly increased the remote user's engagement, with increased use of visual communication cues. This is further supported by the mental effort and perceived dominance reported by users. Feedback from users also indicated that visual communication cues are valuable for remote users in MP systems.
Mitchell Norman, Gun A. Lee, Ross T. Smith, and M. Billinghurst. "The Impact of Remote User's Role in a Mixed Reality Mixed Presence System." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19), 2019. https://doi.org/10.1145/3359997.3365691
Naturalistic tactile sensations can be elicited by mechanical stimuli because mechanical stimulation reproduces natural physical phenomena. However, mechanical stimulation that is too strong may cause injury. Electrical stimulation, by contrast, can elicit strong tactile sensations without damaging the skin, but it is inferior in terms of naturalness. We propose and validate a haptic method for presenting naturalistic and intense sensations by combining electrical and mechanical stimulation. We validate the method by verifying whether electrical stimulation can enhance the subjective strength of mechanical stimulation and, at the same time, whether mechanical stimulation can eliminate the bizarre sensation of electrical stimulation. We find that the proposed method increases subjective intensity by 25% on average across participants compared with the mechanical stimulus alone, and decreases the bizarre sensation compared with the electrical stimulus alone. The method can be used to enhance the experience of virtual-reality content, but has room for improvement, especially in terms of intensity enhancement.
R. Mizuhara, Akifumi Takahashi, and H. Kajimoto. "Combination of Mechanical and Electrical Stimulation for an Intense and Realistic Tactile Sensation." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19), 2019. https://doi.org/10.1145/3359997.3365714
Users within a virtual environment often need support in designing the environment around them, in particular in finding relevant content while remaining immersed. We focus on familiar sketch-based interaction to support the process of content placement, and specifically investigate how interactions from a tablet or desktop translate into the virtual environment. To understand sketching interaction within a virtual environment, we compare different methods of sketch interaction: 3D mid-air sketching, 2D sketching on a virtual tablet, 2D sketching on a fixed virtual whiteboard, and 2D sketching on a real tablet. The user remains immersed within the environment, queries a database containing detailed 3D models, and places them in the virtual environment. Our results show that 3D mid-air sketching is considered the most intuitive method for searching a collection of models, while the addition of physical devices creates confusion due to the complications of their inclusion within a virtual environment. While we pose our work as a retrieval problem for 3D models of chairs, our results extend to other sketching tasks for virtual environments.
D. Giunchi, Stuart James, Donald Degraen, and A. Steed. "Mixing realities for sketch retrieval in Virtual Reality." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19), 2019. https://doi.org/10.1145/3359997.3365751
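Sketch-based retrieval of this kind typically reduces to nearest-neighbour search in a joint embedding space shared by sketches and 3D models. A minimal cosine-similarity sketch of that final search step (the encoder producing the embeddings is assumed and not specified by the paper; names are illustrative):

```python
import numpy as np

def retrieve(sketch_feature, model_features, k=3):
    """Return indices of the k database models most similar to a sketch.

    sketch_feature -- 1D embedding of the user's sketch (D,)
    model_features -- (N, D) matrix of precomputed model embeddings
    Comparison is by cosine similarity, so all vectors are
    L2-normalised before the dot product.
    """
    q = sketch_feature / np.linalg.norm(sketch_feature)
    m = model_features / np.linalg.norm(model_features, axis=1, keepdims=True)
    scores = m @ q                     # cosine similarity per model
    return np.argsort(-scores)[:k]    # best matches first

# Toy database of three model embeddings; the query matches model 0
# exactly and model 2 closely.
models = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
ranked = retrieve(np.array([1.0, 0.0]), models, k=3)
```

In a real system the model embeddings would be computed offline, so the immersed user pays only for encoding the sketch and one matrix-vector product per query.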