Pub Date: 2023-11-20 | DOI: 10.3389/frvir.2023.1294539
Domna Banakou, Mel Slater
Moving through a virtual environment that is larger than the physical space in which the participant operates has been a challenge since the early days of virtual reality. Many different methods have been proposed, such as joystick-based navigation; walking in place, where the participant makes walking movements but remains stationary in the physical space; and redirected walking, where the environment is surreptitiously changed so that the participant has the illusion of walking in a long straight line in the virtual space while actually walking in, for example, a circle in the physical space. Each type of method has its limitations, ranging from simulator sickness to still requiring more physical space than is available. Stimulated by the COVID-19 lockdown, we developed a new method of locomotion, which we refer to as interactive redirected walking. Here, the participant really walks but, when reaching a boundary, rotates the virtual world so that continued walking always stays within the physical boundary. We carried out an exploratory study comparing this method with walking in place with respect to presence, using questionnaires as well as qualitative responses based on comments written by the participants, which were subjected to sentiment analysis. Surprisingly, we found that smaller physical boundaries favor interactive redirected walking, but for boundary lengths of more than approximately seven adult paces, the walking-in-place method is preferable.
Title: A comparison of two methods for moving through a virtual environment: walking in place and interactive redirected walking
Pub Date: 2023-11-14 | DOI: 10.3389/frvir.2023.1294482
Hélène Buche, Aude Michel, Nathalie Blanc
Objectives: Our study is a follow-up of previous research carried out in physiotherapy. The present study aims to evaluate the effectiveness of virtual reality (VR) as a tool to support emotional management during the acute phase of breast cancer treatment (a chemotherapy session). Materials and methods: A quasi-experimental protocol was implemented in an oncology department with 120 patients randomly assigned to one of four conditions. During the first 10 minutes of a chemotherapy session, patients either experienced participatory immersion in a natural environment, experienced contemplative immersion in the same environment, listened to classical music, or received no distraction. The involvement of the patients in the virtual environment and the relevance of the immersive modalities were measured through the evaluation of the sense of presence. Particular attention was given to evaluating the anxiety level and emotional state of the patients. Results: VR during chemotherapy reduces anxiety and calms emotional tension. The multi-sensory nature of this emotional regulation support tool was more effective than music in inducing positive emotion, and this benefit was most salient when immersion was offered in an interactive format. Conclusion: The relevance of providing support through VR in oncology is confirmed in this study. This tool can compensate for the fluctuating availability of caregivers by offering patients the possibility of shaping their own relaxing worlds and could help preserve the patient-caregiver relationship.
Title: When virtual reality supports patients’ emotional management in chemotherapy
Pub Date: 2023-11-10 | DOI: 10.3389/frvir.2023.1272234
Christian Rack, Tamara Fernando, Murat Yalcin, Andreas Hotho, Marc Erich Latoschik
This article presents a new dataset containing motion and physiological data of users playing the game "Half-Life: Alyx". The dataset specifically targets behavioral and biometric identification of XR users. It includes motion and eye-tracking data captured with an HTC Vive Pro from 71 users playing the game on two separate days for 45 minutes. Additionally, we collected physiological data from 31 of these users. We provide benchmark performances for the task of motion-based identification of XR users with two prominent state-of-the-art deep learning architectures (GRU and CNN). After training on the first session of each user, the best model can identify the 71 users in the second session with a mean accuracy of 95% within 2 minutes. The dataset is freely available at https://github.com/cschell/who-is-alyx
Title: Who is Alyx? A new behavioral biometric dataset for user identification in XR
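The paper benchmarks GRU and CNN models for this task; as a deliberately simpler illustration of the cross-session protocol (enroll on session one, identify in session two), here is a hedged nearest-centroid sketch on synthetic motion windows. The data generation, the mean/std feature choice, and all names are assumptions for the example, not the dataset's actual format or the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def window_features(motion):
    """Summarize a (frames, channels) motion window by per-channel mean and std."""
    return np.concatenate([motion.mean(axis=0), motion.std(axis=0)])

# Synthetic stand-in for two recording sessions of 3 users: each user's motion
# channels are drawn around a user-specific offset (an assumption for illustration).
offsets = rng.normal(size=(3, 6))
session1 = {u: offsets[u] + 0.1 * rng.normal(size=(100, 6)) for u in range(3)}
session2 = {u: offsets[u] + 0.1 * rng.normal(size=(100, 6)) for u in range(3)}

# "Enroll" on session 1: one feature centroid per user.
centroids = {u: window_features(m) for u, m in session1.items()}

def identify(motion):
    """Return the enrolled user whose centroid is nearest to this window."""
    feats = window_features(motion)
    return min(centroids, key=lambda u: np.linalg.norm(centroids[u] - feats))

# Identify each session-2 window by nearest centroid.
predictions = {u: identify(m) for u, m in session2.items()}
```

The real benchmark replaces the centroid comparison with sequence models over raw tracking streams, but the enrollment/identification split across days is the same.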
Virtual Reality (VR) environments have proven useful in memory assessment and have been shown to be more sensitive than pen-and-paper tests in prospective memory assessment. Moreover, these techniques provide the advantage of offering neuropsychological evaluations in a controlled, ecologically valid, and safe manner. In the present study, we used Enhance VR, a cognitive training and assessment tool in virtual reality. User performance was evaluated by means of the in-game scoring system. The primary goal of this study was to compare Enhance VR in-game scoring with existing validated cognitive assessment tests. As a secondary goal, we tested the tolerance and usability of the system. Forty-one older adults took part in the study (mean age = 62.8 years). Each participant was evaluated with a predefined set of traditional pen-and-paper cognitive assessment tools and played four VR games. We did not find that the traditional pen-and-paper measures addressing the same cognitive ability significantly explained the variability of the Enhance VR game scores. This lack of effect may be related to the gamified environment of Enhance VR, where players gain or lose points depending on their game performance, thus deviating from the scoring system used in traditional methodologies. Moreover, while the games were inspired by traditional assessment methodologies, presenting them in a VR environment might modify the processing of the information provided to the participant. The hardware and Enhance VR games were extremely well tolerated, intuitive, and within the reach of even those with no prior experience.
Title: Evaluating cognitive performance using virtual reality gamified exercises
Pub Date: 2023-11-07 | DOI: 10.3389/frvir.2023.1153145
Davide Borghetti, Carlotta Zanobini, Ilenia Natola, Saverio Ottino, Angela Parenti, Victòria Brugada-Ramentol, Hossein Jalali, Amir Bozorgzadeh
Introduction: Incorporating an additional limb that synchronizes with multiple body parts enables the user to achieve high task accuracy and smooth movement. In this case, the visual appearance of the wearable robotic limb contributes to the sense of embodiment. Additionally, the user’s motor function changes as a result of this embodiment. However, it remains unclear how users perceive the attribution of the wearable robotic limb in the context of multiple body parts (perceptual attribution), and the impact of visual similarity in this context remains unknown. Methods: This study investigated the perceptual attribution of a virtual robotic limb by examining proprioceptive drift and the bias of visual similarity under single-body-part conditions (synchronizing with hand or foot motion only) and a multiple-body-part condition (synchronizing with the average motion of the hand and foot). In the experiment, participants engaged in a point-to-point task using a virtual robotic limb that synchronized with their hand and foot motions simultaneously. Furthermore, the visual appearance of the end-effector was altered to explore the influence of visual similarity. Results: The experiment revealed that only the participants’ proprioception of their foot aligned with the virtual robotic limb, while the frequency of error correction during the point-to-point task did not change across conditions. Conversely, subjective illusions of embodiment occurred for both the hand and foot. In this case, the visual appearance of the robotic limbs contributed to the correlations between hand and foot proprioceptive drift and the subjective embodiment illusion, respectively. Discussion: These results suggest that proprioception is specifically attributed to the foot through motion synchronization, whereas subjective perceptions are attributed to both the hand and foot.
Title: Investigating the perceptual attribution of a virtual robotic limb synchronizing with hand and foot simultaneously
Pub Date: 2023-11-07 | DOI: 10.3389/frvir.2023.1210303
Kuniharu Sakurada, Ryota Kondo, Fumihiko Nakamura, Michiteru Kitazaki, Maki Sugimoto
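Proprioceptive drift, the measure examined in this abstract, is commonly operationalized as the shift of the felt limb position toward the virtual limb between pre- and post-exposure pointing judgments. The sketch below uses that common operationalization; the paper's exact measure may differ, and the positions are illustrative numbers only.

```python
import math

def proprioceptive_drift(felt_before, felt_after, virtual_limb):
    """Signed drift of the felt limb position toward the virtual limb: the
    component of the judgment shift along the direction from the pre-exposure
    judgment to the virtual limb (positive = drift toward the virtual limb)."""
    to_virtual = [v - b for v, b in zip(virtual_limb, felt_before)]
    norm = math.hypot(*to_virtual)
    shift = [a - b for a, b in zip(felt_after, felt_before)]
    return sum(s * t for s, t in zip(shift, to_virtual)) / norm

# The felt foot position moves 3 cm toward a virtual limb located 20 cm away
# (2D positions in meters; purely illustrative values).
drift = proprioceptive_drift((0.0, 0.0), (0.03, 0.0), (0.20, 0.0))
```

A drift near zero for the hand alongside a positive drift for the foot would correspond to the asymmetry the abstract reports.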
Pub Date: 2023-10-26 | DOI: 10.3389/frvir.2023.1253155
Ramy Kirollos, Chris M. Herdman
Introduction: The present study set out to determine which sensory system most influences self-motion perception when visual and vestibular cues are in conflict. We paired caloric vestibular stimulation that signaled motion in either the clockwise or counter-clockwise direction with a visual display that indicated self-rotation in either the same or the opposite direction. Methods: In Experiment 1 (E1), caloric vestibular stimulation was used to produce vestibular circular vection. In Experiment 2 (E2), a virtual optokinetic drum was used to produce visual circular vection in a VR headset. Vection speed, direction, and duration were recorded using a potentiometer knob that the participant controlled in E1 and E2. In Experiment 3 (E3), visual and vestibular stimuli were matched to approximately equal speeds across modalities for each participant, in preparation for Experiment 4 (E4). In E4, participants observed a moving visual pattern in a virtual reality (VR) headset while receiving caloric vestibular stimulation. Participants rotated the potentiometer knob while attending to the visual–vestibular stimulus presentations to indicate their perceived circular vection. E4 had two conditions: 1) a congruent condition, where calorics and the visual display indicated circular vection in the same direction; and 2) an incongruent condition, where calorics and the visual display indicated circular vection in opposite directions. Results and discussion: There were equal reports of knob rotation in the direction consistent with the visual and the vestibular self-rotation direction in the incongruent condition of E4 across trials. There were no significant differences in knob rotation speed or duration between conditions. These results demonstrate that the brain appears to weight visual and vestibular cues equally during a visual–vestibular conflict at approximately equal speeds. These results are most consistent with the optimal cue integration hypothesis.
Title: Visual–vestibular sensory integration during congruent and incongruent self-rotation percepts using caloric vestibular stimulation
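The optimal cue integration hypothesis invoked in this abstract is usually formalized as maximum-likelihood fusion: each cue's estimate is weighted by its inverse variance, so equally reliable cues receive equal weight, matching the equal-weighting result reported. A small sketch with assumed numbers (the rotation estimates and variances are illustrative, not the study's data):

```python
def fuse_cues(est_visual, var_visual, est_vestibular, var_vestibular):
    """Maximum-likelihood (reliability-weighted) fusion of two independent cues:
    each estimate is weighted by its inverse variance, and the fused estimate
    has lower variance than either cue alone."""
    w_vis = 1.0 / var_visual
    w_ves = 1.0 / var_vestibular
    fused = (w_vis * est_visual + w_ves * est_vestibular) / (w_vis + w_ves)
    fused_var = 1.0 / (w_vis + w_ves)
    return fused, fused_var

# Equally reliable but opposing rotation cues (deg/s), as in the incongruent
# condition with matched speeds: the fused percept shows no net bias.
fused, fused_var = fuse_cues(+10.0, 4.0, -10.0, 4.0)
```

With unequal reliabilities the same formula pulls the fused percept toward the more reliable cue, which is why matching cue speeds (E3) matters for a clean test of equal weighting.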
Pub Date: 2023-10-23 | DOI: 10.3389/frvir.2023.1250823
S. M. Ali Mousavi, Wendy Powell, Max M. Louwerse, Andrew T. Hendrickson
Introduction: There is rising interest in using virtual reality (VR) applications in learning, yet different studies have reported different findings on their impact and effectiveness. The current paper addresses this heterogeneity in the results. Moreover, contrary to most studies, we use a VR application actually used in industry, thereby addressing the ecological validity of the findings. Methods and Results of Study 1: In two studies, we explored the effects of an industrial VR safety training application on learning. In our first study, we examined both interactive VR and passive monitor viewing. Using univariate, comparative, and correlational analytical approaches, the study demonstrated a significant increase in self-efficacy and knowledge scores in interactive VR but showed no significant differences compared to passive monitor viewing. Unlike passive monitor viewing, however, the VR condition showed a positive relation between learning gains and self-efficacy. Methods and Results of Study 2: In our subsequent study, a structural equation model (SEM) demonstrated that self-efficacy and users’ simulation performance predicted the learning gains in VR. We furthermore found that VR hardware experience indirectly predicted learning gains through the self-efficacy and user simulation performance factors. Conclusion/Discussion of both studies: The findings of these studies suggest that the central role of self-efficacy in explaining learning gains generalizes from academic VR tasks to those used in industry training.
Title: Behavior and self-efficacy modulate learning in virtual reality simulations for training: a structural equation modeling approach
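The indirect prediction reported in Study 2 corresponds, in SEM terms, to an indirect effect: the product of the path coefficients along the mediating chain. The toy sketch below recovers such an indirect effect from simulated data with ordinary least squares; the coefficients, sample size, and variable names are assumptions for illustration, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Toy mediation chain (assumed generative model):
# hardware experience -> self-efficacy -> learning gain.
n = 1000
experience = rng.normal(size=n)
self_efficacy = 0.5 * experience + 0.1 * rng.normal(size=n)
learning_gain = 0.8 * self_efficacy + 0.1 * rng.normal(size=n)

a = slope(experience, self_efficacy)      # path a, ~0.5 by construction
b = slope(self_efficacy, learning_gain)   # path b, ~0.8 by construction
indirect_effect = a * b                   # experience -> gain via self-efficacy
```

In a full SEM the paths are estimated jointly with fit indices and standard errors; the product-of-paths logic for the indirect effect is the same.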
Pub Date: 2023-10-19 | DOI: 10.3389/frvir.2023.1259263
Kamilla Pedersen, Peter Musaeus
Virtual Reality has emerged as a valuable tool in medical education, primarily for teaching basic sciences and procedural skills. However, its potential in clinical psychiatry, particularly in comprehending the subjective experiences of individuals with mental illness, remains largely untapped. This paper aims to address this gap by proposing a phenomenologically driven approach to the design of virtual reality in psychiatry education. Insight into psychopathology, which involves the systematic study of abnormal experiences as well as self-awareness on the part of the clinician, demands training. The clinician must develop sensitivity, observational skills, and an understanding of patients’ subjective experiences. While integrating the subjective perspective and promoting emotional self-awareness in psychiatry education have been recommended, further research is necessary to harness virtual reality effectively for this purpose. Drawing on the convergence of virtual reality and phenomenological approaches to grasping subjectivity and psychopathology, this paper aims to advance the teaching of psychopathology. It underscores the importance of integrating biomedical knowledge with the lived experiences of psychiatric patients to offer learners a comprehensive understanding of clinical psychiatry. This approach is deeply rooted in the theories of three influential figures: Karl Jaspers, a German psychiatrist and philosopher who emphasized the role of phenomenology in clinical psychiatry; Ludwig Binswanger, a Swiss psychiatrist and psychotherapist known for his work on existential analysis; and Medard Boss, a Swiss psychiatrist and psychoanalyst who introduced Daseinsanalysis, focusing on the individual’s existence in the world. To facilitate learning in acute psychiatry, a virtual reality scenario was developed.
This scenario offers two perspectives: one from the patient’s viewpoint, simulating a severe psychotic incident, and the other from the perspective of junior doctors, exposing them to the challenges of communication, decision-making, and stress in a clinical setting. This paper argues that these phenomenological approaches are valuable in helping inform the didactical considerations in the design of the virtual reality scenario, enhancing the learning experience in psychiatry education. It highlights the potential of virtual reality to deepen understanding in the teaching of clinical psychiatry and provides practical insights into its application in an educational context.
Title: A phenomenological approach to virtual reality in psychiatry education
Pub Date : 2023-10-17DOI: 10.3389/frvir.2023.1119238
Héloïse Baillet, Simone Burin-Chu, Laure Lejeune, Morgan Le Chénéchal, Régis Thouvarecq, Nicolas Benguigui, Pascale Leconte
Objective: The aim of the present study was to evaluate the impact of different task constraints on participants’ adaptation when performing a 3D visuomotor tracking task in a virtual environment. Methods: Twenty-three volunteer participants were tested with the HTC Vive Pro Eye VR headset in a task that consisted of tracking a virtual target moving in a cube with an effector controlled with the preferred hand. Participants had to perform 120 trials according to three task constraints (i.e., gain, size, and speed), each performed under four randomized conditions. The target-effector distance and the elbow’s range of movement were measured. Results: The results showed an increase in the distance to the target when the task constraints were the strongest. In addition, a change in movement kinematics was observed, involving an increase in elbow amplitude as task constraints increased. It also appeared that the depth dimension played a major role in task difficulty, elbow amplitude, and coupling in the tracking task. Conclusion: This research is an essential step towards characterizing interactions with a 3D virtual environment and showing how virtual constraints can facilitate the arm’s involvement in the depth dimension.
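The two outcome measures described above — the target-effector distance and the elbow’s range of movement — can be sketched as simple per-trial computations. This is a minimal illustration, not the authors’ analysis code; the sampled position and angle traces are hypothetical inputs:

```python
import math

def target_effector_distance(target, effector):
    """Euclidean distance between 3D target and effector positions (x, y, z)."""
    return math.sqrt(sum((t - e) ** 2 for t, e in zip(target, effector)))

def mean_tracking_error(target_path, effector_path):
    """Mean target-effector distance over the sampled frames of one trial."""
    dists = [target_effector_distance(t, e)
             for t, e in zip(target_path, effector_path)]
    return sum(dists) / len(dists)

def range_of_movement(angles):
    """Range (max - min) of a joint-angle trace, e.g., elbow flexion in degrees."""
    return max(angles) - min(angles)
```

For example, a trial in which the effector stays 5 units from the target on every frame yields a mean tracking error of 5.0, and an elbow trace spanning 10° to 50° yields a 40° range of movement.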
{"title":"Impact of task constraints on a 3D visuomotor tracking task in virtual reality","authors":"Héloïse Baillet, Simone Burin-Chu, Laure Lejeune, Morgan Le Chénéchal, Régis Thouvarecq, Nicolas Benguigui, Pascale Leconte","doi":"10.3389/frvir.2023.1119238","DOIUrl":"https://doi.org/10.3389/frvir.2023.1119238","url":null,"abstract":"Objective: The aim of the present study was to evaluate the impact of different task constraints on the participants’ adaptation when performing a 3D visuomotor tracking task in a virtual environment. Methods: Twenty-three voluntary participants were tested with the HTC Vive Pro Eye VR headset in a task that consisted of tracking a virtual target moving in a cube with an effector controlled with the preferred hand. Participants had to perform 120 trials according to three task constraints (i.e., gain, size, and speed), each performed according to four randomized conditions. The target-effector distance and elbow range of movement were measured. Results: The results showed an increase in the distance to the target when the task constraints were the strongest. In addition, a change in movement kinematics was observed, involving an increase in elbow amplitude as task constraints increased. It also appeared that the depth dimension played a major role in task difficulty and elbow amplitude and coupling in the tracking task. 
Conclusion: This research is an essential step towards characterizing interactions with a 3D virtual environment and showing how virtual constraints can facilitate arm’s involvement in the depth dimension.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135994887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-10-09DOI: 10.3389/frvir.2023.1190426
Michael Bonfert, Maiko Hübinger, Rainer Malaka
Some virtual reality (VR) applications require true-to-life object manipulation, such as for training or teleoperation. We investigate an interaction technique that replicates the variable grip strength applied to a held object when using force-feedback gloves in VR. We map the exerted finger pressure to the rotational freedom of the virtual object. With a firm grip, the object’s orientation is fixed to the hand. With a loose grip, the user can allow the object to rotate freely within the hand. A user study (N = 21) showed how challenging it was for participants to control the object’s rotation with our prototype employing the SenseGlove DK1. Despite high action fidelity, the grip variability led to poorer performance and increased task load compared to the default fixed rotation. We suspect low haptic fidelity as an explanation, since only kinesthetic forces, but no cutaneous cues, are rendered. We discuss the system design limitations and how to overcome them in future haptic interfaces for physics-based multi-finger object manipulation.
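The core mapping — firm grip locks the object’s orientation to the hand, loose grip frees it — can be sketched as a coupling weight derived from normalized finger pressure. This is a speculative illustration of the idea, not the paper’s implementation; the threshold values and function names are invented for the example:

```python
def rotation_follow_weight(pressure, loose_threshold=0.2, firm_threshold=0.8):
    """Map normalized grip pressure [0, 1] to how strongly the object's
    rotation is coupled to the hand: 1.0 = fixed to the hand, 0.0 = free
    to rotate within the hand. Pressure between the two thresholds is
    interpolated linearly."""
    if pressure >= firm_threshold:
        return 1.0
    if pressure <= loose_threshold:
        return 0.0
    return (pressure - loose_threshold) / (firm_threshold - loose_threshold)

def blended_rotation(hand_angle, free_angle, pressure):
    """Per-frame object rotation (here simplified to a single angle):
    a weighted blend between following the hand and rotating freely."""
    w = rotation_follow_weight(pressure)
    return w * hand_angle + (1.0 - w) * free_angle
```

In a real engine the blend would operate on quaternions (e.g., a slerp) rather than scalar angles, but the weighting logic would be the same.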
{"title":"Challenges of controlling the rotation of virtual objects with variable grip using force-feedback gloves","authors":"Michael Bonfert, Maiko Hübinger, Rainer Malaka","doi":"10.3389/frvir.2023.1190426","DOIUrl":"https://doi.org/10.3389/frvir.2023.1190426","url":null,"abstract":"Some virtual reality (VR) applications require true-to-life object manipulation, such as for training or teleoperation. We investigate an interaction technique that replicates the variable grip strength applied to a held object when using force-feedback gloves in VR. We map the exerted finger pressure to the rotational freedom of the virtual object. With a firm grip, the object’s orientation is fixed to the hand. With a loose grip, the user can allow the object to rotate freely within the hand. A user study ( N = 21) showed how challenging it was for participants to control the object’s rotation with our prototype employing the SenseGlove DK1. Despite high action fidelity, the grip variability led to poorer performance and increased task load compared to the default fixed rotation. We suspect low haptic fidelity as an explanation as only kinesthetic forces but no cutaneous cues are rendered. We discuss the system design limitations and how to overcome them in future haptic interfaces for physics-based multi-finger object manipulation.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135095109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}