Virtual Reality (VR) training games have many potential benefits for autism spectrum disorder (ASD) therapy, such as increasing motivation and improving the ability to perform activities of daily living. Persons with ASD often have deficits in hand-eye coordination, which makes many activities of daily living difficult. A VR game that trains hand-eye coordination could therefore help users with ASD improve their quality of life. Moreover, incorporating users' interests into the game is a promising way to build a motivating game for users with ASD. We propose a Customizable Virtual Human (CVH), which enables users with ASD to easily customize a virtual human and then interact with it in a 3D task. Specifically, we investigated the effects of CVHs in a VR hand-eye coordination training game, Imagination Soccer, and conducted a user study with adolescents with high-functioning ASD. We compared participants' 3D interaction performance, game performance, and user experience (i.e., presence, involvement, and flow) between the CVH and Non-customizable Virtual Human (NCVH) conditions. The results indicate that CVHs can effectively improve performance in 3D interaction tasks (i.e., blocking a soccer ball) for users with ASD, motivate them to play the game more, and offer a better user experience.
"I Built It!" — Exploring the effects of customizable virtual humans on adolescents with ASD. Chao Mei, L. Mason, J. Quarles. 2015 IEEE Virtual Reality (VR), March 23, 2015. DOI: 10.1109/VR.2015.7223382.
Wesley Griffin, Danny Catacora, S. Satterfield, J. Bullard, J. Terrill
We have created an integrated interactive visualization and analysis environment that can be used immersively or on the desktop to study a simulation of microstructure development during hydration or degradation of cement pastes and concrete. Our environment combines traditional 3D scientific data visualization with 2D information visualization using D3.js running in a web browser. By incorporating D3.js, our visualization allowed the scientist to quickly diagnose and debug errors in the parallel implementation of the simulation.
Incorporating D3.js information visualization into immersive virtual environments. Wesley Griffin, Danny Catacora, S. Satterfield, J. Bullard, J. Terrill. 2015 IEEE Virtual Reality (VR), March 23, 2015. DOI: 10.1109/VR.2015.7223358.
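Coupling a simulation to D3.js charts in the browser hinges on getting the simulation data into a form D3 can load (e.g., via `d3.json()`). A minimal sketch of that hand-off on the simulation side — field names and file layout are our illustrative assumptions, not details from the paper — might look like:

```python
import json

def export_timeseries(records, path):
    """Serialize per-timestep simulation statistics to a JSON array that a
    D3.js page can fetch and bind to SVG marks for 2D information views."""
    # Keep only plain scalar fields so the file stays D3-friendly.
    cleaned = [
        {k: v for k, v in rec.items() if isinstance(v, (int, float, str))}
        for rec in records
    ]
    with open(path, "w") as f:
        json.dump(cleaned, f, indent=2)
    return cleaned

# Hypothetical example: degree of hydration per simulation timestep.
records = [
    {"step": 0, "hydration": 0.00},
    {"step": 100, "hydration": 0.12},
    {"step": 200, "hydration": 0.23},
]
exported = export_timeseries(records, "hydration.json")
```

Plotting such per-step curves alongside the 3D microstructure view is also where parallel-implementation bugs tend to surface, since a diverging or discontinuous curve is immediately visible in a 2D chart.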
K. Özacar, Takuma Hagiwara, Jiawei Huang, Kazuki Takashima, Y. Kitamura
We propose Coupled-clay, a bi-directional 3D collaborative interaction environment that supports 3D modeling work between groups of users at remote locations. Coupled-clay consists of two network-connected workspaces: the Physical Interaction Space and the Virtual Interaction Space. The physical interaction space allows a user to directly manipulate a physical object whose shape and position are precisely tracked. This tracked 3D information is transferred to the virtual interaction space in real time. The virtual interaction space consists of an interactive multi-user stereoscopic 3D tabletop, or another 3D display with a suitable interaction device. Users in the virtual interaction space observe the virtual 3D object corresponding to the physical object and manipulate its geometric attributes (e.g., translation, rotation, and scaling). Additionally, they can control graphical attributes of the virtual object, such as color and texture. Changes in geometric and graphical attributes are sent back to the physical interaction space in real time and applied to the physical object by a robotic arm and a top-mounted projector. Coupled-clay can be used for remote collaboration on 3D modeling tasks, for example between a skilled designer and novice learners. This paper details our Coupled-clay implementation and presents its interaction capabilities.
Coupled-clay: Physical-virtual 3D collaborative interaction environment. K. Özacar, Takuma Hagiwara, Jiawei Huang, Kazuki Takashima, Y. Kitamura. 2015 IEEE Virtual Reality (VR), March 23, 2015. DOI: 10.1109/VR.2015.7223392.
Individuals tend to find realistic walking speeds too slow when relying on treadmill walking or Walking-In-Place (WIP) techniques for virtual travel. This paper details three studies investigating the effects of visual display properties and gain presentation mode on the perceived naturalness of virtual walking speeds: The first study compared three different degrees of peripheral occlusion; the second study compared three different degrees of perceptual distortion produced by varying the geometric field of view (GFOV); and the third study compared three different ways of presenting visual gains. All three studies compared treadmill walking and WIP locomotion. The first study revealed no significant main effects of peripheral occlusion. The second study revealed a significant main effect of GFOV, suggesting that the GFOV size may be inversely proportional to the degree of underestimation of the visual speed. The third study found a significant main effect of gain presentation mode. Allowing participants to interactively adjust the gain led to a smaller range of perceptually natural gains and this approach was significantly faster. However, the efficiency may come at the expense of confidence. Generally the lower and upper bounds of the perceptually natural speeds were higher for treadmill walking than WIP. However, not all differences were statistically significant.
The effect of visual display properties and gain presentation mode on the perceived naturalness of virtual walking speeds. N. C. Nilsson, S. Serafin, R. Nordahl. 2015 IEEE Virtual Reality (VR), March 23, 2015. DOI: 10.1109/VR.2015.7223328.
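An "interactive adjustment" presentation mode like the one in the third study can be thought of as a search over the visual gain driven by repeated "does this still feel too slow?" judgments. This toy bisection — with a simulated observer whose natural point is a gain of 2.0, which is our assumption rather than a result from the paper — illustrates why such a procedure converges quickly:

```python
def adjust_gain(feels_too_slow, lo=0.5, hi=5.0, tol=0.01):
    """Bisect on the visual gain applied to walking speed.
    feels_too_slow(gain) -> True if the virtual speed at this gain
    still feels slower than natural walking."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if feels_too_slow(mid):
            lo = mid   # the natural point lies above mid
        else:
            hi = mid   # the natural point lies at or below mid
    return (lo + hi) / 2.0

# Simulated observer: anything below a gain of 2.0 feels too slow.
natural = adjust_gain(lambda g: g < 2.0)
```

Each judgment halves the search interval, so the procedure needs only about log2((hi - lo) / tol) responses — consistent with the finding that interactive adjustment was the faster mode, even if participants were less confident in the result.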
Desktop haptic devices have been developed for rehabilitation and entertainment. However, the desktop form factor restricts the user's movement, making it difficult to receive force feedback over a wide range of positions and postures. In this study, we developed a 1-DOF wearable haptic device with pneumatic artificial muscles and an MR (magnetorheological) brake. These smart actuators have high power density and can structurally change their output force. The device can therefore render various force sensations, such as elasticity, friction, and viscosity. In this abstract, we describe two experiments, rendering elasticity and friction, to evaluate the performance of the device.
Development of a wearable haptic device with pneumatic artificial muscles and MR brake. Masakazu Egawa, Takumi Watanabe, Taro Nakamura. 2015 IEEE Virtual Reality (VR), March 23, 2015. DOI: 10.1109/VR.2015.7223351.
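The three force sensations named above are classically computed from penetration depth and velocity. The following generic 1-DOF sketch shows the standard rendering laws; the gains are illustrative placeholders, not values from the paper, and the device's actual actuator control is not modeled here:

```python
import math

def render_force(x, v, k=200.0, c=5.0, f_c=1.5):
    """Generic 1-DOF haptic force: spring + damper + Coulomb friction.
    x: penetration depth into the virtual surface (m); v: velocity (m/s)."""
    elastic = -k * x if x > 0 else 0.0               # elasticity: F = -k*x (Hooke's law)
    viscous = -c * v                                 # viscosity:  F = -c*v
    friction = -math.copysign(f_c, v) if v else 0.0  # Coulomb friction opposes motion
    return elastic + viscous + friction
```

On this device, the pneumatic artificial muscles would supply the active spring-like force while the MR brake supplies the dissipative friction/viscosity terms, which is what makes the actuator pairing a natural fit for these three sensations.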
Jean-Luc Lugrin, Maximilian Landeck, Marc Erich Latoschik
In this paper we present a preliminary study of the impact of avatar realism on the illusion of virtual body ownership (IVBO) when using a full-body virtual mirror for fitness training. We evaluated three main types of user representation: realistic avatars, non-realistic avatars, and no avatar at all. Our results revealed that a same-gender realistic human avatar elicited a slightly higher level of illusion and performance. However, qualitative analysis of open questions revealed that the feeling of power was higher with non-realistic, strong-looking avatars.
Avatar embodiment realism and virtual fitness training. Jean-Luc Lugrin, Maximilian Landeck, Marc Erich Latoschik. 2015 IEEE Virtual Reality (VR), March 23, 2015. DOI: 10.1109/VR.2015.7223377.
Victor Adriel de Jesus Oliveira, Wilson J. Sarmiento, Anderson Maciel, L. Nedel, C. Collazos
Communication is a fundamental process in collaborative work. In natural conditions, communication between team members is multimodal. This allows for redundancy, adaptation to different contexts, and different levels of focus. In collaborative virtual environments (CVEs), however, hardware limitations and a lack of appropriate interaction metaphors reduce the amount of collaboration. In this poster, we propose the design and use of a vibrotactile language to improve user intercommunication in CVEs and, consequently, to increase the amount of effective collaboration.
Does vibrotactile intercommunication increase collaboration? Victor Adriel de Jesus Oliveira, Wilson J. Sarmiento, Anderson Maciel, L. Nedel, C. Collazos. 2015 IEEE Virtual Reality (VR), March 23, 2015. DOI: 10.1109/VR.2015.7223391.
H. Chaturvedi, Nathan D. Newsome, Sabarish V. Babu
The effectiveness of the visual realism of virtual characters in engaging users and eliciting affective responses has been an open question. We empirically evaluated the effects of realistic vs. non-realistic rendering of virtual humans on the emotional response of participants in a medical virtual reality system designed to teach users to recognize the signs and symptoms of patient deterioration. In a between-subjects experiment protocol, participants interacted with one of three different appearances of a virtual patient: realistic, non-realistic cartoon-shaded, and charcoal-sketch-like. The emotional impact of the rendering conditions was measured via a combination of subjective and objective metrics.
An evaluation of virtual human appearance fidelity on user's positive and negative affect in human-virtual human interaction. H. Chaturvedi, Nathan D. Newsome, Sabarish V. Babu. 2015 IEEE Virtual Reality (VR), March 23, 2015. DOI: 10.1109/VR.2015.7223346.
Hyejin Kim, Elisabeth Adelia Widjojo, Jae-In Hwang
This paper presents a novel bare-hand interaction method for wearable AR (augmented reality). The proposed method uses hierarchical virtual buttons placed on an image target, providing precise hand interaction on the image-target surface in wearable AR. The method runs on a wearable AR system and uses an image-target tracker to create occlusion-based interaction buttons. We introduce a hierarchical virtual button method that enables more precise and faster interaction with augmented objects.
Dynamic hierarchical virtual button-based hand interaction for wearable AR. Hyejin Kim, Elisabeth Adelia Widjojo, Jae-In Hwang. 2015 IEEE Virtual Reality (VR), March 23, 2015. DOI: 10.1109/VR.2015.7223368.
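Occlusion-based virtual buttons typically fire when the hand hides enough of the tracked image-target features inside a button's region. One way to read the "hierarchical" idea — only test fine-grained buttons once a coarse parent region is occluded — is the following sketch; the threshold, feature sets, and button names are our illustrative assumptions, not the paper's implementation:

```python
def occluded(region_features, visible_features, threshold=0.5):
    """A virtual button 'presses' when the hand hides enough of the
    tracked features inside its region of the image target."""
    if not region_features:
        return False
    visible = sum(1 for f in region_features if f in visible_features)
    return visible / len(region_features) < threshold

def pressed_button(tree, visible_features):
    """Walk the button hierarchy: children are tested only when their
    parent region is occluded, so fine buttons cost nothing otherwise."""
    for node in tree:
        if occluded(node["features"], visible_features):
            hit = pressed_button(node.get("children", []), visible_features)
            return hit if hit else node["name"]
    return None

# Hypothetical layout: a coarse 'panel' region with two child buttons.
tree = [{"name": "panel", "features": {1, 2, 3, 4},
         "children": [{"name": "ok", "features": {1, 2}},
                      {"name": "cancel", "features": {3, 4}}]}]
hit = pressed_button(tree, visible_features={4})  # hand covers the 'ok' side
```

The hierarchy is what buys precision and speed at once: the coarse test rejects most frames cheaply, and the fine test disambiguates neighboring buttons only when a press is plausible.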
M. Rodrigue, A. Waranis, Tim Wood, Tobias Höllerer
This paper presents the design and implementation of a system for simulating mixed reality in setups combining mobile devices and large backdrop displays. With a mixed reality simulator, one can perform usability studies and evaluate mixed reality systems while minimizing confounding variables. This paper describes how mobile device AR design factors can be flexibly and systematically explored without sacrificing the touch and direct unobstructed manipulation of a physical personal MR display. First, we describe general principles to consider when implementing a mixed reality simulator, enumerating design factors. Then, we present our implementation which utilizes personal mobile display devices in conjunction with a large surround-view display environment. Standing in the center of the display, a user may direct a mobile device, such as a tablet or head-mounted display, to a portion of the scene, which affords them a potentially annotated view of the area of interest. The user may employ gesture or touch screen interaction on a simulated augmented camera feed, as they typically would in video-see-through mixed reality applications. We present calibration and system performance results and illustrate our system's flexibility by presenting the design of three usability evaluation scenarios.
Mixed reality simulation with physical mobile display devices. M. Rodrigue, A. Waranis, Tim Wood, Tobias Höllerer. 2015 IEEE Virtual Reality (VR), March 23, 2015. DOI: 10.1109/VR.2015.7223331.
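In a simulator of this kind, the portion of the backdrop rendered on the device's simulated camera feed follows from the device's tracked orientation. For a cylindrical surround display, the covered region reduces to an azimuth arc; this small sketch (with assumed field-of-view values, not calibration data from the paper) shows the mapping, including wraparound at 360°:

```python
def visible_arc(yaw_deg, hfov_deg):
    """Return the (start, end) azimuth arc of a cylindrical surround
    display covered by a handheld device's simulated camera frustum."""
    half = hfov_deg / 2.0
    start = (yaw_deg - half) % 360.0
    end = (yaw_deg + half) % 360.0
    return start, end

# Tablet pointing at azimuth 90 degrees with an assumed 60-degree horizontal FOV.
arc = visible_arc(90.0, 60.0)
```

The simulator would then composite the annotated scene content falling inside this arc onto the device screen, mimicking a video-see-through camera feed.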