Editorial: Human spatial perception, cognition, and behaviour in extended reality
Jiayan Zhao, B. Riecke, Jonathan W. Kelly, Jeanine Stefanucci, A. Klippel
Pub Date: 2023-07-31 | DOI: 10.3389/frvir.2023.1257230
{"title":"Editorial: Human spatial perception, cognition, and behaviour in extended reality","authors":"Jiayan Zhao, B. Riecke, Jonathan W. Kelly, Jeanine Stefanucci, A. Klippel","doi":"10.3389/frvir.2023.1257230","DOIUrl":"https://doi.org/10.3389/frvir.2023.1257230","url":null,"abstract":"","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48151668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hand interaction designs in mixed and augmented reality head mounted display: a scoping review and classification
Richard Nguyen, Charles Gouin-Vallerand, M. Amiri
Pub Date: 2023-07-31 | DOI: 10.3389/frvir.2023.1171230
Mixed reality took its first step towards democratization in 2017 with the launch of the first generation of commercial devices. As a new medium, one of its challenges is to develop interactions that exploit its built-in spatial awareness and body tracking. More specifically, at the crossroads of artificial intelligence and human-computer interaction, the goal is to go beyond the Window, Icon, Menu, Pointer (WIMP) paradigm that humans mainly use on desktop computers. Hand interactions, whether as a standalone modality or as a component of a multimodal interface, are among the most popular and best supported techniques across mixed reality prototypes and commercial devices. In this context, this paper presents a scoping literature review of hand interactions in mixed reality. The goal of this review is to identify recent findings on the design of hand interactions and on the role of artificial intelligence in their development and behavior. The review highlights the main interaction techniques and their technical requirements between 2017 and 2022, and introduces a Metaphor-behavior taxonomy to classify those interactions.
{"title":"Hand interaction designs in mixed and augmented reality head mounted display: a scoping review and classification","authors":"Richard Nguyen, Charles Gouin-Vallerand, M. Amiri","doi":"10.3389/frvir.2023.1171230","DOIUrl":"https://doi.org/10.3389/frvir.2023.1171230","url":null,"abstract":"Mixed reality has made its first step towards democratization in 2017 with the launch of a first generation of commercial devices. As a new medium, one of the challenges is to develop interactions using its endowed spatial awareness and body tracking. More specifically, at the crossroad between artificial intelligence and human-computer interaction, the goal is to go beyond the Window, Icon, Menu, Pointer (WIMP) paradigm humans are mainly using on desktop computer. Hand interactions either as a standalone modality or as a component of a multimodal modality are one of the most popular and supported techniques across mixed reality prototypes and commercial devices. In this context, this paper presents scoping literature review of hand interactions in mixed reality. The goal of this review is to identify the recent findings on hand interactions about their design and the place of artificial intelligence in their development and behavior. This review resulted in the highlight of the main interaction techniques and their technical requirements between 2017 and 2022 as well as the design of the Metaphor-behavior taxonomy to classify those interactions.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44910998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The impact of meditation aided by VR technology as an emerging therapeutic to ease cancer related anxiety, stress, and fatigue
D. Franklin, Charles Silvestro, Robert A. Carrillo, Yewon Yang, Dharani Annadurai, Sangavai Ganesan, Divya Sai Jyothi Vasantham, Soujanya Mettu, Mehal Patel, Manasi S. Patil, Nandini Devi Akurathi
Pub Date: 2023-07-25 | DOI: 10.3389/frvir.2023.1195196
Patients diagnosed with cancer experience a high degree of stress as well as side effects from treatments that can greatly impact their quality of life. Many patients experience long-term side effects such as pain, fatigue, anxiety, depression, and cognitive dysfunction. Several studies have reported that virtual reality (VR) interventions offer substantial benefits in reducing symptoms of anxiety, depression, and pain, and in improving cognitive function in cancer patients undergoing therapy. In this study, we analyzed the acceptability, feasibility, and tolerability of PNI Thrive, a 10-min VR guided meditation application, as an adjuvant digital therapeutic aid for cancer patients in a clinical setting. Patients diagnosed with various cancers, and at different stages of therapy, participated in this study. Our data suggest that the adjuvant VR treatment was successful in making patients feel calmer, more relaxed, refreshed, and more empowered. We propose that routine exposure of patients to VR interventions will help improve their response to anti-cancer therapies and their quality of life.
{"title":"The impact of meditation aided by VR technology as an emerging therapeutic to ease cancer related anxiety, stress, and fatigue","authors":"D. Franklin, Charles Silvestro, Robert A. Carrillo, Yewon Yang, Dharani Annadurai, Sangavai Ganesan, Divya Sai Jyothi Vasantham, Soujanya Mettu, Mehal Patel, Manasi S. Patil, Nandini Devi Akurathi","doi":"10.3389/frvir.2023.1195196","DOIUrl":"https://doi.org/10.3389/frvir.2023.1195196","url":null,"abstract":"Patients diagnosed with cancer experience a high degree of stress as well as side effects from treatments that can greatly impact their quality of life. Many patients experience long-term side effects such as pain, fatigue, anxiety, depression, and cognitive dysfunction. Several studies have reported that the use of virtual reality (VR) interventions show substantial benefits in reducing symptoms of anxiety, depression, pain, and cognitive functions in cancer patients undergoing therapy. In this study we analyzed the acceptability, feasibility, and tolerance of PNI Thrive, a 10-min VR guided meditation application, as an adjuvant digital therapeutic aid for cancer patients in a clinical setting. Patients diagnosed with various cancers, and at different stages of therapy, participated in this study. Our data suggests that the adjuvant VR treatment was successful in making patients feel calmer, more relaxed, refreshed, and more empowered. We propose that routine exposure of patients to VR interventions will help improve their response to anti-cancer therapies and quality of life.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47202377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing virtual reality exposure scenarios to treat anxiety in people with epilepsy: Phase 2 of the AnxEpiVR clinical trial
Samantha Lewis-Fung, Danielle Tchao, H. Gray, Emma Nguyen, S. Pardini, L. Harris, Dale Calabia, Lora Appel
Pub Date: 2023-07-19 | DOI: 10.3389/frvir.2023.1209535
Introduction: Anxiety in people with epilepsy (PwE) is characterized by distinct features related to having the condition and thus requires tailored treatment. Although virtual reality (VR) exposure therapy is widely used to treat a number of anxiety disorders, its use has not yet been explored in people with epilepsy. The AnxEpiVR study is a three-phase pilot trial that represents the first effort to design and evaluate the feasibility of VR exposure therapy to treat epilepsy-specific interictal anxiety. This paper describes the results of the design phase (Phase 2), in which we created a minimum viable product of VR exposure scenarios to be tested with PwE in Phase 3.
Methods: Phase 2 employed participatory design methods and hybrid (online and in-person) focus groups involving people with lived experience (n = 5) to design the VR exposure therapy program. 360-degree video was chosen as the medium, and scenes were filmed using the Ricoh Theta Z1 360-degree camera.
Results: Our minimum viable product includes three exposure scenarios: (A) Social Scene—Dinner Party, (B) Public Setting—Subway, and (C) Public Setting—Shopping Mall. Each scenario contains seven 5-minute scenes of varying intensity, from which a subset may be chosen and ordered to create a customized hierarchy based on the individual’s specific fears. Our collaborators with lived experience who tested the product considered the exposure therapy program to 1) be safe for PwE, 2) have a high level of fidelity, and 3) be appropriate for treating a broad range of fears related to epilepsy/seizures.
Discussion: We were able to show that 360-degree videos can achieve a realistic, immersive experience for the user without requiring extensive technical training for the designer. Strengths and limitations of using 360-degree video to design exposure scenarios for PwE are described, along with future directions for testing and refining the product.
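For illustration only, here is a minimal Python sketch of how a customized exposure hierarchy might be assembled from a scenario's rated scenes, in the spirit of the scene structure described above. The data structure, function names, and rating scheme are hypothetical, not the AnxEpiVR implementation.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    name: str          # e.g., "Subway - scene 3"
    intensity: int     # designer-assigned intensity, 1 (mild) to 7 (intense)
    duration_min: int  # each scene runs about 5 minutes

def build_hierarchy(scenes, fear_ratings, max_scenes=5):
    """Keep only scenes the individual rated as fear-relevant and
    order them from least to most intense, as in graded exposure."""
    relevant = [s for s in scenes if fear_ratings.get(s.name, 0) > 0]
    return sorted(relevant, key=lambda s: s.intensity)[:max_scenes]

# Hypothetical usage: a user fears the subway scenario's later scenes most.
subway = [Scene(f"Subway - scene {i}", i, 5) for i in range(1, 8)]
ratings = {"Subway - scene 2": 3, "Subway - scene 5": 8, "Subway - scene 7": 9}
for s in build_hierarchy(subway, ratings):
    print(s.name, s.intensity)
```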
{"title":"Designing virtual reality exposure scenarios to treat anxiety in people with epilepsy: Phase 2 of the AnxEpiVR clinical trial","authors":"Samantha Lewis-Fung, Danielle Tchao, H. Gray, Emma Nguyen, S. Pardini, L. Harris, Dale Calabia, Lora Appel","doi":"10.3389/frvir.2023.1209535","DOIUrl":"https://doi.org/10.3389/frvir.2023.1209535","url":null,"abstract":"Introduction: Anxiety in people with epilepsy (PwE) is characterized by distinct features related to having the condition and thus requires tailored treatment. Although virtual reality (VR) exposure therapy is widely-used to treat a number of anxiety disorders, its use has not yet been explored in people with epilepsy. The AnxEpiVR study is a three-phase pilot trial that represents the first effort to design and evaluate the feasibility of VR exposure therapy to treat epilepsy-specific interictal anxiety. This paper describes the results of the design phase (Phase 2) where we created a minimum viable product of VR exposure scenarios to be tested with PwE in Phase 3.Methods: Phase 2 employed participatory design methods and hybrid (online and in-person) focus groups involving people with lived experience (n = 5) to design the VR exposure therapy program. 360-degree video was chosen as the medium and scenes were filmed using the Ricoh Theta Z1 360-degree camera.Results: Our minimum viable product includes three exposure scenarios: (A) Social Scene—Dinner Party, (B) Public Setting—Subway, and (C) Public Setting—Shopping Mall. Each scenario contains seven 5-minute scenes of varying intensity, from which a subset may be chosen and ordered to create a customized hierarchy based on appropriateness to the individual’s specific fears. Our collaborators with lived experience who tested the product considered the exposure therapy program to 1) be safe for PwE, 2) have a high level of fidelity and 3) be appropriate for treating a broad range of fears related to epilepsy/seizures.Discussion: We were able to show that 360-degree videos are capable of achieving a realistic, immersive experience for the user without requiring extensive technical training for the designer. Strengths and limitations using 360-degree video for designing exposure scenarios for PwE are described, along with future directions for testing and refining the product.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48336465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation and improvement of HMD-based and RGB-based hand tracking solutions in VR
Dennis Reimer, Iana Podkosova, D. Scherzer, H. Kaufmann
Pub Date: 2023-07-19 | DOI: 10.3389/frvir.2023.1169313
Hand tracking has become a state-of-the-art technology in the modern generation of consumer VR devices. However, off-the-shelf solutions do not support detecting more than two hands at the same time at distances beyond arm’s length. The ability to track multiple hands at larger distances would be beneficial for colocated multi-user VR scenarios, allowing user-worn devices to track the hands of other users and thereby reducing motion artifacts caused by hand tracking loss. With the overall goal of enabling natural hand interactions in colocated multi-user VR, we propose an RGB image-based hand tracking method, built upon the MediaPipe framework, that can track multiple hands at once at distances of up to 3 m. We compared our method’s accuracy to that of the Oculus Quest and Leap Motion, at different distances from the tracking device and in static and dynamic settings. The results of our evaluation show that our method is only slightly less accurate than Oculus Quest or Leap Motion in the near range (with median errors below 1.75 cm at distances below 75 cm); at larger distances, its accuracy remains stable (with a median error of 4.7 cm at a distance of 2.75 m), while Leap Motion and Oculus Quest either lose tracking or produce very inaccurate results. Given the broad choice of suitable hardware (any RGB camera) and the ease of setup, our method can be directly applied to colocated multi-user VR scenarios.
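As a concrete illustration of the kind of pipeline the abstract describes, below is a minimal sketch of multi-hand detection from an RGB camera using the MediaPipe Hands solution the method builds on. The camera source, confidence threshold, and four-hand limit are illustrative choices rather than the authors' configuration, and the paper's distance-robustness improvements are not reproduced here.

```python
import cv2
import mediapipe as mp

# Track up to four hands in a live RGB stream (beyond the usual two-hand limit).
hands = mp.solutions.hands.Hands(
    static_image_mode=False,     # video mode: reuse detections across frames
    max_num_hands=4,
    min_detection_confidence=0.5,
)

cap = cv2.VideoCapture(0)        # any RGB camera works
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    for hand in results.multi_hand_landmarks or []:
        wrist = hand.landmark[mp.solutions.hands.HandLandmark.WRIST]
        print(f"wrist at ({wrist.x:.2f}, {wrist.y:.2f})")  # normalized coords
cap.release()
hands.close()
```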
{"title":"Evaluation and improvement of HMD-based and RGB-based hand tracking solutions in VR","authors":"Dennis Reimer, Iana Podkosova, D. Scherzer, H. Kaufmann","doi":"10.3389/frvir.2023.1169313","DOIUrl":"https://doi.org/10.3389/frvir.2023.1169313","url":null,"abstract":"Hand tracking has become a state-of-the-art technology in the modern generation of consumer VR devices. However, off-the-shelf solutions do not support hand detection for more than two hands at the same time at distances beyond arm’s length. The possibility to track multiple hands at larger distances would be beneficial for colocated multi-user VR scenarios, allowing user-worn devices to track the hands of other users and therefore reducing motion artifacts caused by hand tracking loss. With the global focus of enabling natural hand interactions in colocated multi-user VR, we propose an RGB image input-based hand tracking method, built upon the MediaPipe framework, that can track multiple hands at once at distances of up to 3 m. We compared our method’s accuracy to that of Oculus Quest and Leap Motion, at different distances from the tracking device and in static and dynamic settings. The results of our evaluation show that our method provides only slightly less accurate results than Oculus Quest or Leap motion in the near range (with median errors below 1.75 cm at distances below 75 cm); at larger distances, its accuracy remains stable (with a median error of 4.7 cm at the distance of 2.75 m) while Leap Motion and Oculus Quest either loose tracking or produce very inaccurate results. Taking into account the broad choice of suitable hardware (any RGB camera) and the ease of setup, our method can be directly applied to colocated multi-user VR scenarios.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47246242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding the effect of a virtual moderator on people’s perception in remote discussion using social VR
Chi-Lan Yang, Keigo Matsumoto, Songping Yu, Leo Sawada, Kiyoaki Arakawa, Daisuke Yamada, H. Kuzuoka
Pub Date: 2023-07-18 | DOI: 10.3389/frvir.2023.1198024
Social VR enables people to join a remote discussion while maintaining high social presence and physical proximity through embodied avatars. However, missing nonverbal cues, such as mutual eye contact and nuanced facial expressions, make it challenging for distributed members to manage turn-taking, which can lead to unequal participation and affect trust building. Therefore, we propose a virtual moderator designed to make distributed members feel included through its nonverbal behavior. The virtual moderator was designed with a “prompt Q&A” feature to enable users to share feedback and an “attention guidance” feature to encourage participation. The preliminary results of a controlled experiment in social VR with 30 participants showed that seeing the virtual moderator’s body orientation enhanced participants’ psychological safety, whereas the prompt Q&A feature enhanced the perceived co-presence of their remote counterparts. We discuss how nonverbal behavior can be designed into a virtual moderator to shape people’s perception of group discussion in social VR, and we point out the challenges of providing multiple supports simultaneously in social VR.
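To make the attention-guidance idea concrete, here is a hypothetical Python sketch (not the authors' system) of a simple rule that orients a moderator avatar toward the member who has spoken least, nudging them to participate.

```python
import math

def least_active(speaking_time):
    """speaking_time: member name -> seconds spoken so far."""
    return min(speaking_time, key=speaking_time.get)

def yaw_towards(moderator_xz, target_xz):
    """Yaw angle (radians) for the moderator avatar to face a member,
    assuming a y-up coordinate system with positions on the x-z plane."""
    dx = target_xz[0] - moderator_xz[0]
    dz = target_xz[1] - moderator_xz[1]
    return math.atan2(dx, dz)

positions = {"A": (1.0, 0.0), "B": (-1.0, 0.5), "C": (0.0, 1.5)}
spoken = {"A": 120.0, "B": 45.0, "C": 300.0}
quiet = least_active(spoken)                      # "B" has spoken least
print(quiet, yaw_towards((0.0, 0.0), positions[quiet]))
```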
{"title":"Understanding the effect of a virtual moderator on people’s perception in remote discussion using social VR","authors":"Chi-Lan Yang, Keigo Matsumoto, Songping Yu, Leo Sawada, Kiyoaki Arakawa, Daisuke Yamada, H. Kuzuoka","doi":"10.3389/frvir.2023.1198024","DOIUrl":"https://doi.org/10.3389/frvir.2023.1198024","url":null,"abstract":"Social VR enables people to join a remote discussion by keeping a high social presence and physical proximity using embodied avatars. However, the missing nonverbal cues, such as mutual eye contact and nuanced facial expression, make it challenging for distributed members to manage turn-taking, which could lead to unequal participation and affect trust building. Therefore, we propose a virtual moderator to make distributed members feel included by seeing their nonverbal behavior. The virtual moderator was designed with a “prompt Q&A″ feature to enable users to share feedback and an “attention guidance” feature to encourage participation. The preliminary result of a controlled experiment in social VR with 30 participants showed that seeing the virtual moderator’s body orientation enhanced participants’ psychological safety. In contrast, the prompt Q&A feature enhanced the perceived co-presence of their remote counterparts. We discussed how nonverbal behavior could be designed using a virtual moderator to shape human perception of the group discussion in social VR. We also pointed out challenges when providing multiple supports simultaneously in social VR.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42040879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embodiment of virtual feet correlates with motor performance in a target-stepping task: a pilot study
Alex van den Berg, B. D. Vries, Zoë Breedveld, Annelouk van Mierlo, Marnix Tijhuis, L. Marchal-Crespo
Pub Date: 2023-07-18 | DOI: 10.3389/frvir.2023.1104638
Immersive Virtual Reality (IVR) has gained popularity in neurorehabilitation for its potential to increase patients’ motivation and engagement. A crucial yet relatively unexplored aspect of IVR interfaces is the patient’s representation in the virtual world, for example, with an avatar. A higher level of embodiment over an avatar has been shown to enhance motor performance during upper limb training and could potentially be employed to enhance neurorehabilitation. However, the relationship between avatar embodiment and gait performance remains unexplored. In this work, we present the results of a pilot study with 12 healthy young participants evaluating the effect of different virtual lower limb representations on foot placement accuracy while stepping over a trail of 16 virtual targets. We compared three levels of virtual representation: i) a full-body avatar, ii) only feet, and iii) no representation. Full-body tracking was computed using standard VR trackers to synchronize the avatar with the participants’ motions. Foot placement accuracy was measured as the distance between the foot’s center of mass and the center of the selected virtual target. Additionally, we evaluated the level of embodiment over each virtual representation through a questionnaire. Our findings indicate that foot placement accuracy increases with some form of virtual representation, either full-body or feet only, compared to having no virtual representation; however, the feet-only and full-body representations do not differ significantly in accuracy. Importantly, we found a negative correlation between the level of embodiment of the foot representation and the distance between the placed foot and the target, whereas no such correlation was found for the full-body representation. Our results highlight the importance of embodying a virtual representation of the feet when performing a task that requires accurate foot placement, although showing a full-body avatar does not appear to further enhance accuracy. Moreover, our results suggest that the level of embodiment of the virtual feet may modulate motor performance in this stepping task. This work motivates future research on how embodiment of virtual representations affects motor control, which could be exploited for IVR gait rehabilitation.
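A minimal numerical sketch of the accuracy metric and the correlation analysis described above, using synthetic data; the numbers and variable names are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# One simulated trial: 16 virtual targets and the foot's center of mass
# at each step (positions in meters on the ground plane).
targets = rng.uniform(0.0, 3.0, size=(16, 2))
foot_com = targets + rng.normal(0.0, 0.05, size=(16, 2))

# Placement error: Euclidean distance from foot CoM to target center.
errors = np.linalg.norm(foot_com - targets, axis=1)
print(f"mean placement error: {errors.mean():.3f} m")

# Across 12 simulated participants: does higher embodiment of the feet
# go with smaller errors (the negative correlation the paper reports)?
embodiment = rng.uniform(1.0, 7.0, size=12)   # questionnaire scores
mean_error = 0.10 - 0.01 * embodiment + rng.normal(0.0, 0.01, size=12)
r, p = stats.pearsonr(embodiment, mean_error)
print(f"r = {r:.2f}, p = {p:.3f}")
```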
{"title":"Embodiment of virtual feet correlates with motor performance in a target-stepping task: a pilot study","authors":"Alex van den Berg, B. D. Vries, Zoë Breedveld, Annelouk van Mierlo, Marnix Tijhuis, L. Marchal-Crespo","doi":"10.3389/frvir.2023.1104638","DOIUrl":"https://doi.org/10.3389/frvir.2023.1104638","url":null,"abstract":"Immersive Virtual Reality (IVR) has gained popularity in neurorehabilitation for its potential to increase patients’ motivation and engagement. A crucial yet relatively unexplored aspect of IVR interfaces is the patients’ representation in the virtual world, such as with an avatar. A higher level of embodiment over avatars has been shown to enhance motor performance during upper limb training and has the potential to be employed to enhance neurorehabilitation. However, the relationship between avatar embodiment and gait performance remains unexplored. In this work, we present the results of a pilot study with 12 healthy young participants that evaluates the effect of different virtual lower limb representations on foot placement accuracy while stepping over a trail of 16 virtual targets. We compared three levels of virtual representation: i) a full-body avatar, ii) only feet, and iii) no representation. Full-body tracking is computed using standard VR trackers to synchronize the avatar with the participants’ motions. Foot placement accuracy is measured as the distance between the foot’s center of mass and the center of the selected virtual target. Additionally, we evaluated the level of embodiment over each virtual representation through a questionnaire. Our findings indicate that foot placement accuracy increases with some form of virtual representation, either full-body or foot, compared to having no virtual representation. However, the foot and full-body representations do not show significant differences in accuracy. Importantly, we found a negative correlation between the level of embodiment of the foot representation and the distance between the placed foot and the target. However, no such correlation was found for the full-body representation. Our results highlight the importance of embodying a virtual representation of the foot when performing a task that requires accurate foot placement. However, showing a full-body avatar does not appear to further enhance accuracy. Moreover, our results suggest that the level of embodiment of the virtual feet might modulate motor performance in this stepping task. This work motivates future research on the effect of embodiment over virtual representations on motor control to be exploited for IVR gait rehabilitation.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45292700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adolescent risk-taking and decision making: a qualitative investigation of a virtual reality experience of gangs and violence
Delfina Bilello, Lucy J. Swancott, Juliane A. Kloess, S. Burnett Heyes
Pub Date: 2023-07-17 | DOI: 10.3389/frvir.2023.1142241
Introduction: Gang involvement poses serious risks to young people, including antisocial and criminal behaviour, sexual and criminal exploitation, and mental health problems. There is a need for research-informed development of preventive interventions. To this end, we conducted a qualitative study of young people’s responses to an educational virtual reality (VR) experience of an encounter with a gang, in order to understand young people’s decisions, emotions, and the resulting consequences.
Methods: Young people (N = 24, aged 13-15; 11 female, 13 male) underwent the VR experience followed by semi-structured focus group discussions. Questions focused on virtual decision-making (motivations, thoughts, feelings, consequences) and user experiences of taking part. Data were analysed using Thematic Analysis.
Results: Three themes were developed to represent how participants’ perceptions of the gang, themselves, and the context influenced virtual decisions. Social pressure from the gang competed with participants’ wish to stand by their morals and establish individual identity. The VR setting, through its escalating events and plausible characters, created an “illusion of reality” and a sense of authentic decisions and emotions, yielding insights for real life in a safe, virtual environment.
Discussion: The findings shed light on processes influencing adolescent decision-making in a virtual context of risk-taking, peer pressure, and contact with a gang. In particular, they highlight the potential of using VR in interventions with young people, given its engaging and realistic nature.
{"title":"Adolescent risk-taking and decision making: a qualitative investigation of a virtual reality experience of gangs and violence","authors":"Delfina Bilello, Lucy J. Swancott, Juliane A. Kloess, S. Burnett Heyes","doi":"10.3389/frvir.2023.1142241","DOIUrl":"https://doi.org/10.3389/frvir.2023.1142241","url":null,"abstract":"Introduction: Gang involvement poses serious risks to young people, including antisocial and criminal behaviour, sexual and criminal exploitation, and mental health problems. There is a need for research-informed development of preventive interventions. To this end, we conducted a qualitative study of young people’s responses to an educational virtual reality (VR) experience of an encounter with a gang, to understand young people’s decisions, emotions and consequences.Methods: Young people (N = 24 aged 13-15, 11 female, 13 male) underwent the VR experience followed by semi-structured focus group discussions. Questions focused on virtual decision-making (motivations, thoughts, feelings, consequences) and user experiences of taking part. Data were analysed using Thematic Analysis.Results: Three themes were developed to represent how participants’ perceptions of the gang, themselves, and the context influenced virtual decisions. Social pressure from the gang competed with participants’ wish to stand by their morals and establish individual identity. The VR setting, through its escalating events and plausible characters, created an “illusion of reality” and sense of authentic decisions and emotions, yielding insights for real-life in a safe, virtual environment.Discussion: Findings shed light on processes influencing adolescent decision-making in a virtual context of risk-taking, peer pressure and contact with a gang. Particularly, they highlight the potential for using VR in interventions with young people, given its engaging and realistic nature.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47792018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing immersive stories with novice VR creators: a study of autobiographical VR storytelling during the COVID-19 pandemic
Sojung Bahng, Victoria McArthur, Ryan M. Kelly
Pub Date: 2023-07-14 | DOI: 10.3389/frvir.2023.1174701
Virtual reality (VR) is increasingly being used as a tool for eliciting empathy and emotional identification in fact-based stories. However, it is not always clear whether VR stories authentically deliver the protagonists’ perspectives if the works are not created by or with the protagonists themselves. It is therefore crucial for the VR community to explore effective methods for democratizing VR storytelling and to support novice VR designers in creating autobiographical stories. In this paper, we report findings from a collaborative design research project that aimed to create autobiographical stories with novice VR designers who lacked experience in VR storytelling. We collaborated with university students in Canada to design eight individual VR stories expressing each student’s experiences of lockdown during the early stages of the COVID-19 pandemic. We conducted interviews with the students to understand how VR contributed to conveying their individual experiences. Our findings demonstrate how immersive VR can serve as a meaningful tool for sharing autobiographical stories by delivering the character’s feelings, creating a sense of confinement and isolation, expressing inner worlds, and showing environmental details. Our discussion draws attention to the significance of careful camera positioning and movement in VR story design, the meaningful use of limited interaction and disorienting components, and the balance between spatial and temporal information in a three-dimensional environment. Our study highlights the potential of VR as an autobiographical storytelling tool and demonstrates how VR stories can be created through iterative collaboration between VR experts and novices.
{"title":"Designing immersive stories with novice VR creators: a study of autobiographical VR storytelling during the COVID-19 pandemic","authors":"Sojung Bahng, Victoria McArthur, Ryan M. Kelly","doi":"10.3389/frvir.2023.1174701","DOIUrl":"https://doi.org/10.3389/frvir.2023.1174701","url":null,"abstract":"Virtual reality (VR) is increasingly being used as a tool for eliciting empathy and emotional identification in fact-based stories. However, it may not be clear whether VR stories authentically deliver the protagonists’ perspectives if the works are not created by or with the protagonists themselves. Therefore, it is crucial for the VR community to explore effective methods for democratizing VR storytelling, and to support novice VR designers in creating autobiographical stories. In this paper, we report findings from a collaborative design research project that aimed to create autobiographical stories with novice VR designers who lacked experience in VR storytelling. We collaborated with university students in Canada to design eight individual VR stories that expressed each student’s experiences of lockdown, during the early stages of the COVID-19 pandemic. We conducted interviews with the students to understand how VR contributed to conveying their individual experiences. Our findings demonstrate how immersive VR can be used as a meaningful tool for sharing autobiographical stories by delivering the character’s feelings, creating a sense of confinement and isolation, expressing inner worlds, and showing environmental details. Our discussion draws attention to the significance of careful camera positioning and movement in VR story design, the meaningful use of limited interaction and disorienting components, and the balance between spatial and temporal information in a three-dimensional environment. Our study highlights the potential of VR as an autobiographical storytelling tool and demonstrates how VR stories can be created through iterative collaboration between VR experts and novices.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44772911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embodied mixed reality with passive haptics in STEM education: randomized control study with chemistry titration
M. Johnson-Glenberg, C. S. Yu, F. Liu, Charles Amador, Yueming Bao, Shufan Yu, R. Likamwa
Pub Date: 2023-07-14 | DOI: 10.3389/frvir.2023.1047833
Researchers, educators, and multimedia designers need to better understand how mixing physical tangible objects with virtual experiences affects learning and science identity. In this novel study, a 3D-printed tangible that is an accurate facsimile of the sort of expensive glassware chemists use in real laboratories is tethered to a laptop running a digitized lesson. As interactive educational content is increasingly placed online, it is important to understand the educational boundary conditions associated with passive haptics and 3D-printed manipulables. Cost-effective printed objects would be particularly welcome in rural and low socio-economic status (SES) classrooms. A Mixed Reality (MR) experience was created in which a physical 3D-printed haptic burette controlled a computer-based chemistry titration experiment. This randomized control trial with 136 college students had two conditions: 1) low-embodied control (using keyboard arrows), and 2) high-embodied experimental (physically turning a valve/stopcock on the 3D-printed burette). Although both groups displayed similar significant gains on the declarative knowledge test, deeper analyses revealed nuanced Aptitude by Treatment Interactions (ATIs). These interactions favored the high-embodied experimental group that used the MR device, both for titration-specific posttest knowledge questions and for science efficacy and science identity. Students with higher prior science knowledge displayed higher titration knowledge scores after using the experimental 3D-printed haptic device. A multi-modal linguistic and gesture analysis revealed that, during recall, the experimental participants used the stopcock-turning gesture significantly more often, and their recalls produced a significantly different Epistemic Network Analysis (ENA). ENA is a type of 2D projection of the recall data; stronger connections were seen in the high-embodied group, mainly centering on the key hand-turning gesture. Instructors and designers should consider the multi-modal and multi-dimensional nature of the user interface, and how the addition of another sensory-based learning signal (haptics) might differentially affect lower prior knowledge students. One hypothesis is that haptically manipulating novel devices during learning may create more cognitive load. For students with low prior knowledge, it may be advantageous to begin learning content on a more ubiquitous interface (e.g., keyboard) before moving to more novel, multi-modal MR devices/interfaces.
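For readers unfamiliar with ATI analyses, the sketch below shows the standard way to test an Aptitude-by-Treatment Interaction: regress the outcome on prior knowledge, condition, and their product, then inspect the interaction term. The column names and simulated data are hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 136
df = pd.DataFrame({
    "prior": rng.normal(0.0, 1.0, n),       # standardized prior science knowledge
    "condition": rng.integers(0, 2, n),     # 0 = keyboard, 1 = haptic burette
})
# Simulate the reported pattern: the haptic condition helps
# higher-prior-knowledge students more.
df["posttest"] = (0.4 * df["prior"] + 0.1 * df["condition"]
                  + 0.3 * df["prior"] * df["condition"]
                  + rng.normal(0.0, 1.0, n))

# "prior * condition" expands to prior + condition + prior:condition;
# the prior:condition coefficient is the ATI effect of interest.
model = smf.ols("posttest ~ prior * condition", data=df).fit()
print(model.summary())
```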
{"title":"Embodied mixed reality with passive haptics in STEM education: randomized control study with chemistry titration","authors":"M. Johnson-Glenberg, C. S. Yu, F. Liu, Charles Amador, Yueming Bao, Shufan Yu, R. Likamwa","doi":"10.3389/frvir.2023.1047833","DOIUrl":"https://doi.org/10.3389/frvir.2023.1047833","url":null,"abstract":"Researchers, educators, and multimedia designers need to better understand how mixing physical tangible objects with virtual experiences affects learning and science identity. In this novel study, a 3D-printed tangible that is an accurate facsimile of the sort of expensive glassware that chemists use in real laboratories is tethered to a laptop with a digitized lesson. Interactive educational content is increasingly being placed online, it is important to understand the educational boundary conditions associated with passive haptics and 3D-printed manipulables. Cost-effective printed objects would be particularly welcome in rural and low Socio-Economic (SES) classrooms. A Mixed Reality (MR) experience was created that used a physical 3D-printed haptic burette to control a computer-based chemistry titration experiment. This randomized control trial study with 136 college students had two conditions: 1) low-embodied control (using keyboard arrows), and 2) high-embodied experimental (physically turning a valve/stopcock on the 3D-printed burette). Although both groups displayed similar significant gains on the declarative knowledge test, deeper analyses revealed nuanced Aptitude by Treatment Interactions (ATIs). These interactions favored the high-embodied experimental group that used the MR device for both titration-specific posttest knowledge questions and for science efficacy and science identity. Those students with higher prior science knowledge displayed higher titration knowledge scores after using the experimental 3D-printed haptic device. A multi-modal linguistic and gesture analysis revealed that during recall the experimental participants used the stopcock-turning gesture significantly more often, and their recalls created a significantly different Epistemic Network Analysis (ENA). ENA is a type of 2D projection of the recall data, stronger connections were seen in the high embodied group mainly centering on the key hand-turning gesture. Instructors and designers should consider the multi-modal and multi-dimensional nature of the user interface, and how the addition of another sensory-based learning signal (haptics) might differentially affect lower prior knowledge students. One hypothesis is that haptically manipulating novel devices during learning may create more cognitive load. For low prior knowledge students, it may be advantageous for them to begin learning content on a more ubiquitous interface (e.g., keyboard) before moving them to more novel, multi-modal MR devices/interfaces.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45287260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}