The sense of embodiment in Virtual Reality and its assessment methods
Pub Date: 2023-12-05 | DOI: 10.3389/frvir.2023.1141683
Martin Guy, Jean-Marie Normand, Camille Jeunet-Kelway, Guillaume Moreau
The sense of embodiment refers to the sensations of being inside, having, and controlling a body. In virtual reality, it is possible to substitute a person’s body with a virtual body, referred to as an avatar. Modulating the sense of embodiment through modifications of this avatar has perceptual and behavioural consequences that can influence the way users interact with the virtual environment. It is therefore essential to define metrics that enable a reliable assessment of the sense of embodiment in virtual reality, to better understand its dimensions, the way they interact, and their influence on the quality of interaction in the virtual environment. In this review, we first introduce the current knowledge on the sense of embodiment and its dimensions (the senses of agency, body ownership, and self-location), and how they relate to one another. We then review the different methods currently used to assess the sense of embodiment, ranging from questionnaires to neurophysiological measures. We provide a critical analysis of the existing metrics, discussing their advantages and drawbacks in the context of virtual reality. Notably, we argue that real-time measures of embodiment that are specific and do not require dual-tasking are the most relevant in the context of virtual reality. Electroencephalography seems a good candidate for the future if its drawbacks (such as its sensitivity to movement and its limited practicality) can be overcome. While the perfect metric, if it exists, has yet to be identified, this work provides clues on which metric to choose depending on the context, which should contribute to better assessing and understanding the sense of embodiment in virtual reality.
{"title":"The sense of embodiment in Virtual Reality and its assessment methods","authors":"Martin Guy, Jean-Marie Normand, Camille Jeunet-Kelway, Guillaume Moreau","doi":"10.3389/frvir.2023.1141683","DOIUrl":"https://doi.org/10.3389/frvir.2023.1141683","url":null,"abstract":"The sense of embodiment refers to the sensations of being inside, having, and controlling a body. In virtual reality, it is possible to substitute a person’s body with a virtual body, referred to as an avatar. Modulations of the sense of embodiment through modifications of this avatar have perceptual and behavioural consequences on users that can influence the way users interact with the virtual environment. Therefore, it is essential to define metrics that enable a reliable assessment of the sense of embodiment in virtual reality to better understand its dimensions, the way they interact, and their influence on the quality of interaction in the virtual environment. In this review, we first introduce the current knowledge on the sense of embodiment, its dimensions (senses of agency, body ownership, and self-location), and how they relate the ones with the others. Then, we dive into the different methods currently used to assess the sense of embodiment, ranging from questionnaires to neurophysiological measures. We provide a critical analysis of the existing metrics, discussing their advantages and drawbacks in the context of virtual reality. Notably, we argue that real-time measures of embodiment, which are also specific and do not require double tasking, are the most relevant in the context of virtual reality. Electroencephalography seems a good candidate for the future if its drawbacks (such as its sensitivity to movement and practicality) are improved. While the perfect metric has yet to be identified if it exists, this work provides clues on which metric to choose depending on the context, which should hopefully contribute to better assessing and understanding the sense of embodiment in virtual reality.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":"108 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138599749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RealTHASC—a cyber-physical XR testbed for AI-supported real-time human autonomous systems collaborations
Pub Date: 2023-12-04 | DOI: 10.3389/frvir.2023.1210211
Andre Paradise, S. Surve, Jovan C. Menezes, Madhav Gupta, Vaibhav Bisht, Kyung Rak Jang, Cong Liu, Suming Qiu, Junyi Dong, Jane Shin, Silvia Ferrari
Today’s research on human-robot teaming requires the ability to test artificial intelligence (AI) algorithms for perception and decision-making in complex real-world environments. Field experiments, also referred to as experiments “in the wild,” do not provide the level of detailed ground truth necessary for thorough performance comparisons and validation. Experiments on pre-recorded real-world data sets are also significantly limited in their usefulness because they do not allow researchers to test the effectiveness of active robot perception, control, or decision strategies in the loop. Additionally, research on large human-robot teams requires tests and experiments that are too costly even for industry and may result in considerable time losses when experiments go awry. The novel Real-Time Human Autonomous Systems Collaborations (RealTHASC) facility at Cornell University interfaces real and virtual robots and humans with photorealistic simulated environments by implementing new concepts for the seamless integration of wearable sensors, motion capture, physics-based simulations, robot hardware, and virtual reality (VR). The result is an extended reality (XR) testbed through which real robots and humans in the laboratory can experience virtual worlds, including virtual agents, via real-time visual feedback and interaction. VR body tracking by DeepMotion is employed in conjunction with the OptiTrack motion capture system to transfer every human subject and robot in the real physical laboratory space into a synthetic virtual environment. The resulting human/robot avatars not only mimic the behaviors of the real agents but also experience the virtual world through virtual sensors and transmit the sensor data back to the real human/robot agents, all in real time. New cross-domain synthetic environments are created in RealTHASC using Unreal Engine™, bridging the simulation-to-reality gap and allowing for the inclusion of underwater/ground/aerial autonomous vehicles, each equipped with a multi-modal sensor suite. The experimental capabilities offered by RealTHASC are demonstrated through three case studies showcasing mixed real/virtual human/robot interactions in diverse domains, leveraging and complementing the benefits of experimentation in simulation and in the real world.
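The closed loop described above (real agents mirrored into the virtual world, virtual sensor data streamed back) can be summarized in a short Python sketch. All names below (MotionCapture, VirtualWorld, sync_loop, the 60 Hz rate) are hypothetical stand-ins for illustration; the abstract does not specify RealTHASC's actual interfaces.

import time

class MotionCapture:
    """Stand-in for an OptiTrack-style tracker; returns an (x, y, z, yaw) pose."""
    def read_pose(self, agent_id):
        return (0.0, 0.0, 0.0, 0.0)  # placeholder pose

class VirtualWorld:
    """Stand-in for the Unreal Engine scene that hosts the avatars."""
    def update_avatar(self, agent_id, pose):
        pass  # move the avatar so it mirrors the real agent's pose

    def read_virtual_sensor(self, agent_id):
        return {"depth": None}  # synthetic sensor data seen by the avatar

def sync_loop(agents, hz=60, max_frames=600):
    """Mirror real agents into the virtual world and feed virtual sensor data back, once per frame."""
    mocap, world = MotionCapture(), VirtualWorld()
    frame_time = 1.0 / hz
    for _ in range(max_frames):
        start = time.monotonic()
        for agent in agents:
            world.update_avatar(agent, mocap.read_pose(agent))
            sensor_data = world.read_virtual_sensor(agent)
            # ...here sensor_data would be transmitted back to the real agent...
        time.sleep(max(0.0, frame_time - (time.monotonic() - start)))

sync_loop(["human_1", "ugv_1"])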
{"title":"RealTHASC—a cyber-physical XR testbed for AI-supported real-time human autonomous systems collaborations","authors":"Andre Paradise, S. Surve, Jovan C. Menezes, Madhav Gupta, Vaibhav Bisht, Kyung Rak Jang, Cong Liu, Suming Qiu, Junyi Dong, Jane Shin, Silvia Ferrari","doi":"10.3389/frvir.2023.1210211","DOIUrl":"https://doi.org/10.3389/frvir.2023.1210211","url":null,"abstract":"Today’s research on human-robot teaming requires the ability to test artificial intelligence (AI) algorithms for perception and decision-making in complex real-world environments. Field experiments, also referred to as experiments “in the wild,” do not provide the level of detailed ground truth necessary for thorough performance comparisons and validation. Experiments on pre-recorded real-world data sets are also significantly limited in their usefulness because they do not allow researchers to test the effectiveness of active robot perception and control or decision strategies in the loop. Additionally, research on large human-robot teams requires tests and experiments that are too costly even for the industry and may result in considerable time losses when experiments go awry. The novel Real-Time Human Autonomous Systems Collaborations (RealTHASC) facility at Cornell University interfaces real and virtual robots and humans with photorealistic simulated environments by implementing new concepts for the seamless integration of wearable sensors, motion capture, physics-based simulations, robot hardware and virtual reality (VR). The result is an extended reality (XR) testbed by which real robots and humans in the laboratory are able to experience virtual worlds, inclusive of virtual agents, through real-time visual feedback and interaction. VR body tracking by DeepMotion is employed in conjunction with the OptiTrack motion capture system to transfer every human subject and robot in the real physical laboratory space into a synthetic virtual environment, thereby constructing corresponding human/robot avatars that not only mimic the behaviors of the real agents but also experience the virtual world through virtual sensors and transmit the sensor data back to the real human/robot agent, all in real time. New cross-domain synthetic environments are created in RealTHASC using Unreal Engine™, bridging the simulation-to-reality gap and allowing for the inclusion of underwater/ground/aerial autonomous vehicles, each equipped with a multi-modal sensor suite. The experimental capabilities offered by RealTHASC are demonstrated through three case studies showcasing mixed real/virtual human/robot interactions in diverse domains, leveraging and complementing the benefits of experimentation in simulation and in the real world.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":"37 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138602472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autism-related traits are related to effectiveness of immersive visual guidance on spatial cognitive ability: a pilot study
Pub Date: 2023-11-30 | DOI: 10.3389/frvir.2023.1291516
Yuki Harada, Makoto Wada
A head-mounted display can restrict users’ visual fields and thereby impair their spatial cognitive ability. Spatial cognition can be assisted with immersive visual guidance; however, whether this technique is useful for individuals with autism-spectrum disorder (ASD) remains unclear. Given recent virtual reality (VR) content targeting individuals with ASD, the relationship between ASD-related traits and the effectiveness of immersive visual guidance should be clarified. This pilot study evaluated how ASD-related traits (autistic traits and empathizing–systemizing cognitive styles) among typically developing individuals relate to the effectiveness of visual guidance. Participants performed visual search and spatial localization tasks while using immersive visual guidance. In the visual search task, participants searched immersive VR environments for a target object and pushed a button according to the target color as quickly as possible. In the localization task, they viewed immersive visual guidance for a short duration and reported the guided direction via a controller. Results showed that visual search times decreased with stronger systemizing cognition; however, ASD-related traits were not significantly related to localization accuracy. These findings suggest that immersive visual guidance is generally useful for individuals with higher ASD-related traits.
{"title":"Autism-related traits are related to effectiveness of immersive visual guidance on spatial cognitive ability: a pilot study","authors":"Yuki Harada, Makoto Wada","doi":"10.3389/frvir.2023.1291516","DOIUrl":"https://doi.org/10.3389/frvir.2023.1291516","url":null,"abstract":"A head-mounted display could potentially restrict users’ visual fields and thereby impair their spatial cognitive ability. Spatial cognition can be assisted with immersive visual guidance. However, whether this technique is useful for individuals with autism-spectrum disorder (ASD) remains unclear. Given the recent virtual reality (VR) contents targeting individuals with ASD, the relationship between ASD-related traits and the effectiveness of immersive visual guidance should be clarified. This pilot study evaluated how ASD-related traits (autistic traits and empathizing–systemizing cognitive styles) among typically developing individuals are related to the effectiveness of visual guidance. Participants performed visual search and spatial localization tasks while using immersive visual guidance. In the visual search task, participants searched immersive VR environments for a target object and pushed a button according to the target color as quickly as possible. In the localization task, they viewed immersive visual guidance for a short duration and localized the guided direction via a controller. Results showed that visual search times were hastened with systemizing cognition. However, ASD-related traits were not significantly related to localization accuracy. These findings suggest that immersive visual guidance is generally useful for individuals with higher ASD-related traits.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":"432 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139204166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Older adults exhibit declines in instrumental activities of daily living during a virtual grocery shopping task
Pub Date: 2023-11-29 | DOI: 10.3389/frvir.2023.1261096
Morgan McGrath Lewis, Colin Waltz, Kathryn Scelina, Logan Scelina, Kelsey Owen, Karissa Hastilow, Mandy Miller Koop, A. Rosenfeldt, Jay L Alberts
Introduction: The successful performance of instrumental activities of daily living (IADLs) is critical to maintaining independence for older adults. Traditional IADL questionnaires and performance-based assessments are time-consuming, potentially unreliable, and fail to adequately consider the interplay between cognitive and motor performance in completing IADLs. The Cleveland Clinic Virtual Reality Shopping (CC-VRS) platform was developed to objectively quantify IADL performance through the characterization of cognitive, motor, and cognitive-motor function. The CC-VRS combines an immersive virtual grocery store with an omnidirectional treadmill to create a scenario in which the user physically navigates through a virtual environment. The primary aim of this project was to determine the known-group validity of the CC-VRS platform to characterize IADL performance in healthy older adults and young adults. Methods: Twenty healthy young (n = 10) and older (n = 10) adults completed the Basic and Complex CC-VRS scenarios. Position data from VR trackers on the hands, waist, and feet were used to quantify motor performance. Cognitive and dual-task performance were automatically calculated by the application during specific shopping sub-tasks. Results: Older adults exhibited significantly worse performance on multiple cognitive, motor, and dual-task outcomes of the CC-VRS (e.g., average walking speed, number of list activations, and stopping frequency). Discussion: The CC-VRS successfully discriminated IADL performance between young and healthy older adults. The complex, realistic environment of the CC-VRS, combined with simultaneous evaluation of motor and cognitive performance, has the potential to more accurately characterize IADL performance by identifying subtle functional deficits that may precede neurological disease.
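To illustrate how such motor outcomes can be derived from tracker data, here is a minimal NumPy sketch computing two of the metrics named above (average walking speed and stopping frequency) from time-stamped waist-tracker positions. The function, thresholds, and y-up coordinate convention are assumptions for illustration, not the CC-VRS implementation.

import numpy as np

def motor_metrics(positions, timestamps, stop_speed=0.1, min_stop_s=0.5):
    """positions: (N, 3) tracker coordinates in metres (y up); timestamps: (N,) seconds."""
    steps = np.diff(positions[:, [0, 2]], axis=0)   # horizontal (x, z) displacement
    dt = np.diff(timestamps)
    speed = np.linalg.norm(steps, axis=1) / dt      # instantaneous speed (m/s)
    avg_speed = np.average(speed, weights=dt)

    # Count stops: contiguous runs below stop_speed lasting at least min_stop_s.
    stopped = speed < stop_speed
    stops, run_time = 0, 0.0
    for is_stopped, step_dt in zip(stopped, dt):
        if is_stopped:
            run_time += step_dt
        else:
            stops += run_time >= min_stop_s
            run_time = 0.0
    stops += run_time >= min_stop_s                 # close a trailing stop, if any
    return avg_speed, stops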
{"title":"Older adults exhibit declines in instrumental activities of daily living during a virtual grocery shopping task","authors":"Morgan McGrath Lewis, Colin Waltz, Kathryn Scelina, Logan Scelina, Kelsey Owen, Karissa Hastilow, Mandy Miller Koop, A. Rosenfeldt, Jay L Alberts","doi":"10.3389/frvir.2023.1261096","DOIUrl":"https://doi.org/10.3389/frvir.2023.1261096","url":null,"abstract":"Introduction: The successful performance of instrumental activities of daily living (IADLs) is critical in maintaining independence for older adults. Traditional IADL questionnaires and performance-based assessments are time consuming, potentially unreliable, and fail to adequately consider the interplay between cognitive and motor performance in completing IADLs. The Cleveland Clinic Virtual Reality Shopping (CC-VRS) platform was developed to objectively quantify IADL performance through the characterization of cognitive, motor, and cognitive-motor function. The CC-VRS combines an immersive virtual grocery store with an omnidirectional treadmill to create a scenario in which the user physically navigates through a virtual environment. The primary aim of this project was to determine the known-group validity of the CC-VRS platform to characterize IADL performance in healthy older adults and young adults.Methods: Twenty healthy young (n = 10) and older (n = 10) adults completed the Basic and Complex CC-VRS scenarios. Position data from VR trackers on the hands, waist, and feet were used to quantify motor performance. Cognitive and dual-task performance were automatically calculated by the application during specific shopping sub-tasks.Results: Older adults exhibited significantly worse performance on multiple cognitive, motor, and dual-task outcomes of the CC-VRS (e. g., average walking speed, number of list activations, and stopping frequency).Discussion: The CC-VRS successfully discriminated IADL performance between young and healthy older adults. The complex realistic environment of the CC-VRS, combined with simultaneous evaluation of motor and cognitive performance, has the potential to more accurately characterize IADL performance by identifying subtle functional deficits that may precede neurological disease.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":"68 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139212059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haptic feedback in a virtual crowd scenario improves the emotional response
Pub Date: 2023-11-28 | DOI: 10.3389/frvir.2023.1242587
R. K. Venkatesan, Domna Banakou, Mel Slater, Manivannan M.
Research has shown that incorporating haptics into virtual environments can increase sensory fidelity and provide powerful and immersive experiences. However, current studies on haptics in virtual interactions primarily focus on one-on-one scenarios, while kinesthetic haptic interactions in large virtual gatherings remain underexplored. This study investigates the impact of kinesthetic haptics on eliciting emotional responses within crowded virtual reality (VR) scenarios. Specifically, we examine the influence of the type and quality of haptic feedback on the perception of positive and negative emotions. To this end, we designed and evaluated different combinations of haptic feedback devices: “No Haptic,” “Tactile Stimulus” delivering tactile cues, and “Haptic Stimulus” delivering both tactile and torque cues, each combined with two immersive 360-degree video crowd scenarios, “Casual Crowd” and “Aggressive Crowd.” The results suggest that varying the type or quality of haptic feedback can evoke different emotional responses in crowded VR scenarios. Participants reported increased levels of nervousness with Haptic Stimulus in both virtual scenarios, while both Tactile Stimulus and Haptic Stimulus were negatively associated with pleasantness and comfort during the interaction. Additionally, participants’ sense that the touch was real was stronger with Haptic Stimulus than with Tactile Stimulus, and the Haptic Stimulus condition had the most positive influence on participants’ sense of identification with the crowd.
{"title":"Haptic feedback in a virtual crowd scenario improves the emotional response","authors":"R. K. Venkatesan, Domna Banakou, Mel Slater, Manivannan M.","doi":"10.3389/frvir.2023.1242587","DOIUrl":"https://doi.org/10.3389/frvir.2023.1242587","url":null,"abstract":"Research has shown that incorporating haptics into virtual environments can increase sensory fidelity and provide powerful and immersive experiences. However, current studies on haptics in virtual interactions primarily focus on one-on-one scenarios, while kinesthetic haptic interactions in large virtual gatherings are underexplored. This study aims to investigate the impact of kinesthetic haptics on eliciting emotional responses within crowded virtual reality (VR) scenarios. Specifically, we examine the influence of type or quality of the haptic feedback on the perception of positive and negative emotions. We designed and developed different combinations of tactile and torque feedback devices and evaluated their effects on emotional responses. To achieve this, we explored different combinations of haptic feedback devices, including “No Haptic,” “Tactile Stimulus” delivering tactile cues, and “Haptic Stimulus” delivering tactile and torque cues, in combination with two immersive 360-degree video crowd scenarios, namely, “Casual Crowd” and “Aggressive Crowd.” The results suggest that varying the type or quality of haptic feedback can evoke different emotional responses in crowded VR scenarios. Participants reported increased levels of nervousness with Haptic Stimulus in both virtual scenarios, while both Tactile Stimulus and Haptic Stimulus were negatively associated with pleasantness and comfort during the interaction. Additionally, we observed that participants’ sense of touch being real was enhanced in Haptic Stimulus compared to Tactile Stimulus. The “Haptic Stimulus” condition had the most positive influence on participants’ sense of identification with the crowd.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139227163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting VR cybersickness and its impact on visuomotor performance using head rotations and field (in)dependence
Pub Date: 2023-11-27 | DOI: 10.3389/frvir.2023.1307925
A. Maneuvrier, Ngoc-Doan-Trang Nguyen, Patrice Renaud
Introduction: This exploratory study aims to contribute to the development of the VR framework by focusing on the issue of cybersickness. The main objective is to explore the possibility of predicting cybersickness using i) field dependence-independence measures and ii) head-rotation data through automatic analyses. The second objective is to assess the impact of cybersickness on visuomotor performance. Methods: 40 participants completed a 13.5-min VR immersion in a first-person shooter game. Head rotations were analyzed in both their spatial (coefficients of variation) and temporal dimensions (detrended fluctuation analyses). Exploratory correlation, linear regression, and cluster-comparison (unsupervised machine learning) analyses were performed to explain cybersickness and visuomotor performance. Traditional VR human factors (sense of presence, state of flow, video game experience, age) were also integrated. Results: Results suggest that field dependence-independence measured before exposure to VR explains one-quarter of the variance in cybersickness, while the Disorientation scale of the Simulator Sickness Questionnaire predicts 16.3% of the visuomotor performance. In addition, automatic analyses of head rotations during immersion revealed two different clusters of participants, one reporting more cybersickness than the other. Discussion: These results are discussed in terms of sensory integration and of a reduction of head rotations as a behavior to avoid negative symptoms. This study suggests that measuring field dependence-independence using the (Virtual) Rod and Frame Test before immersion and tracking head rotations using internal sensors during immersion might serve as powerful tools for VR practitioners.
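For concreteness, here is a minimal Python sketch of detrended fluctuation analysis (DFA), the temporal measure applied to the head rotations above. The window sizes are illustrative; the study's exact DFA parameters are not given in the abstract.

import numpy as np

def dfa_alpha(series, window_sizes=(16, 32, 64, 128, 256)):
    """DFA scaling exponent of a 1-D signal, e.g. head yaw sampled over time."""
    y = np.cumsum(series - np.mean(series))    # integrated, mean-centred profile
    sizes, flucts = [], []
    for n in window_sizes:
        n_windows = len(y) // n
        if n_windows < 2:
            continue                           # too few windows at this scale
        f2 = []
        for w in range(n_windows):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # remove local linear trend
            f2.append(np.mean((seg - trend) ** 2))
        sizes.append(n)
        flucts.append(np.sqrt(np.mean(f2)))    # RMS fluctuation F(n)
    # alpha is the slope of log F(n) versus log n
    return np.polyfit(np.log(sizes), np.log(flucts), 1)[0]

# Example: alpha is near 0.5 for white noise, near 1.0 for 1/f-like fluctuations
alpha = dfa_alpha(np.random.randn(4096))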
{"title":"Predicting VR cybersickness and its impact on visuomotor performance using head rotations and field (in)dependence","authors":"A. Maneuvrier, Ngoc-Doan-Trang Nguyen, Patrice Renaud","doi":"10.3389/frvir.2023.1307925","DOIUrl":"https://doi.org/10.3389/frvir.2023.1307925","url":null,"abstract":"Introduction: This exploratory study aims to participate in the development of the VR framework by focusing on the issue of cybersickness. The main objective is to explore the possibilities of predicting cybersickness using i) field dependence-independence measures and ii) head rotations data through automatic analyses. The second objective is to assess the impact of cybersickness on visuomotor performance.Methods: 40 participants completed a 13.5-min VR immersion in a first-person shooter game. Head rotations were analyzed in both their spatial (coefficients of variations) and temporal dimensions (detrended fluctuations analyses). Exploratory correlations, linear regressions and clusters comparison (unsupervised machine learning) analyses were performed to explain cybersickness and visuomotor performance. Traditional VR human factors (sense of presence, state of flow, video game experience, age) were also integrated.Results: Results suggest that field dependence-independence measured before exposure to VR explain ¼ of the variance of cybersickness, while the Disorientation scale of the Simulator Sickness Questionnaire predicts 16.3% of the visuomotor performance. In addition, automatic analyses of head rotations during immersion revealed two different clusters of participants, one of them reporting more cybersickness than the other.Discussion: These results are discussed in terms of sensory integration and a diminution of head rotations as an avoidance behavior of negative symptoms. This study suggests that measuring field dependence-independence using the (Virtual) Rod and Frame Test before immersion and tracking head rotations using internal sensors during immersion might serve as powerful tools for VR actors.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":"50 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139233640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining the difference between 10- and 20-min of immersive virtual reality on symptoms, affect, and central sensitization in people with chronic back pain
Pub Date: 2023-11-24 | DOI: 10.3389/frvir.2023.1260313
Nancy A. Baker, A. Polhemus, Megan Kenney, Rina Bloch, Nathan Ward, James Intriligator, Robert Edwards
Immersive virtual reality (IVR) is increasingly used as a treatment for chronic pain. In this crossover randomized pilot study, we examined the effect of 10- and 20-min dosages on back pain intensity, affect, and measures of pain sensitization in people with chronic back pain (CBP). Twenty-one people with CBP were seen for two visits of IVR. Participants were randomly assigned to receive either 10 or 20 min of IVR in Visit 1 and the other dosage in Visit 2. Our primary analyses were effect sizes and simple inferential comparisons for pain intensity, affect, fatigue, and measures of pain sensitization assessed using quantitative sensory testing. Overall, IVR had a moderate, significant effect in reducing back pain intensity, negative affect, and painful aftersensations. When dosage was examined, 20 min had a moderate, significant effect on pain while 10 min had a small, non-significant effect, although the between-dosage difference was non-significant. Interestingly, effects were much larger in Visit 1, particularly for 20 min, but diminished in Visit 2, where both dosages had a smaller effect. We interpret these results to indicate that pain modulation may be associated with novelty and engagement, which can attenuate over time if the IVR encounter is not sufficiently engaging. Moreover, in a first session 20 min may be necessary to obtain sufficient competency with IVR, while in subsequent sessions 10 min of IVR may be sufficient to affect pain.
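As a worked example of the kind of effect size and simple inferential comparison mentioned above, the sketch below computes Cohen's d on paired pre/post pain ratings together with a paired t-test. The ratings are invented, and this particular d convention (mean difference over the standard deviation of the differences) is an assumption, not necessarily the authors' exact choice.

import numpy as np
from scipy import stats

def paired_cohens_d(pre, post):
    """Within-subject effect size: d = mean(diff) / sd(diff)."""
    diff = np.asarray(pre) - np.asarray(post)
    return diff.mean() / diff.std(ddof=1)

pre  = np.array([6.0, 5.5, 7.0, 4.0, 6.5])   # invented 0-10 pain ratings before IVR
post = np.array([4.5, 5.0, 5.5, 3.5, 5.0])   # invented ratings after a session
d = paired_cohens_d(pre, post)
t, p = stats.ttest_rel(pre, post)            # the matching inferential comparison
print(f"d = {d:.2f}, t = {t:.2f}, p = {p:.3f}")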
{"title":"Examining the difference between 10- and 20-min of immersive virtual reality on symptoms, affect, and central sensitization in people with chronic back pain","authors":"Nancy A. Baker, A. Polhemus, Megan Kenney, Rina Bloch, Nathan Ward, James Intriligator, Robert Edwards","doi":"10.3389/frvir.2023.1260313","DOIUrl":"https://doi.org/10.3389/frvir.2023.1260313","url":null,"abstract":"Immersive virtual reality (IVR) is increasingly used as a treatment for chronic pain. In this crossover randomized pilot study, we examined the effect of 10- and 20-min dosages on back pain intensity, affect, and measures of pain sensitization in people with chronic back pain (CBP). Twenty-one people with CBP were seen for two visits of IVR. Participants were randomly assigned to receive either 10- or 20-min of IVR in Visit 1 and the other dosage in Visit 2. Our primary analyses were effect sizes and simple inferential comparisons for pain intensity, affect, fatigue, and measures of pain sensitization assessed using quantitative sensory testing. Overall, IVR had a moderate, significant effect in reducing back pain intensity, negative affect, and painful aftersensations. When dosage was examined, 20-min had a moderate, significant effect on pain while 10-min had a small, non-significant effect, although the between-dosage difference was non-significant. Interestingly, effects were much larger in Visit 1, particularly for 20-min, but this diminished in Visit 2, and both dosages had a smaller effect in Visit 2. We interpret these results to indicate that pain modulation may be associated with novelty and engagement that can attenuate over time if the IVR encounter is not sufficiently engaging. Moreover, that if participants are engaged in a single session, 20-min may be necessary to obtain sufficient competency with IVR, while in subsequent sessions, 10-min of IVR may be sufficient to affect pain.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":"228 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139241954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparison of two methods for moving through a virtual environment: walking in place and interactive redirected walking
Pub Date: 2023-11-20 | DOI: 10.3389/frvir.2023.1294539
Domna Banakou, Mel Slater
Moving through a virtual environment that is larger than the physical space in which the participant operates has been a challenge since the early days of virtual reality. Many different methods have been proposed, such as joystick-based navigation; walking in place, where the participant makes walking movements but remains stationary in the physical space; and redirected walking, where the environment is surreptitiously changed, giving the illusion of walking in a long straight line in the virtual space while perhaps walking in a circle in the physical space. Each type of method has its limitations, ranging from simulator sickness to still requiring more physical space than is available. Stimulated by the COVID-19 lockdown, we developed a new method of locomotion that we refer to as interactive redirected walking. Here, the participant really walks but, on reaching a boundary, rotates the virtual world so that continued walking always remains within the physical boundary. We carried out an exploratory study comparing this method with walking in place with respect to presence, using questionnaires as well as qualitative responses based on participants’ written comments, which were subjected to sentiment analysis. Surprisingly, we found that smaller physical boundaries favor interactive redirected walking, but for boundary lengths of more than approximately 7 adult paces, the walking-in-place method is preferable.
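A minimal geometric sketch of that boundary rule, assuming a circular tracked space centred on the origin, might look as follows. The function name, margin, and circular boundary are illustrative; the study's actual implementation is not specified in the abstract.

import math

def world_rotation_at_boundary(pos, heading, radius, margin=0.3):
    """pos: (x, z) of the user in the tracked space, centred on the room;
    heading: current physical walking direction in radians.
    Returns the angle (radians) to rotate the virtual scene about the user,
    or 0.0 while the user is still safely inside the boundary."""
    x, z = pos
    if math.hypot(x, z) < radius - margin:
        return 0.0                        # no redirection needed yet
    inward = math.atan2(-z, -x)           # bearing from the user back to centre
    delta = inward - heading              # turn that points the walk inward again
    return math.atan2(math.sin(delta), math.cos(delta))  # wrap to [-pi, pi]

# Example: at the east edge walking east, the scene rotates half a turn, so
# continuing toward the same virtual target means physically walking back west.
print(world_rotation_at_boundary((3.0, 0.0), 0.0, radius=3.0))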
{"title":"A comparison of two methods for moving through a virtual environment: walking in place and interactive redirected walking","authors":"Domna Banakou, Mel Slater","doi":"10.3389/frvir.2023.1294539","DOIUrl":"https://doi.org/10.3389/frvir.2023.1294539","url":null,"abstract":"Moving through a virtual environment that is larger than the physical space in which the participant operates has been a challenge since the early days of virtual reality. Many different methods have been proposed, such as joystick-based navigation, walking in place where the participant makes walking movements but is stationary in the physical space, and redirected walking where the environment is surreptitiously changed giving the illusion of walking in a long straight line in the virtual space but maybe a circle in the physical space. Each type of method has its limitations, ranging from simulator sickness to still requiring more physical space than is available. Stimulated by the COVID-19 lockdown, we developed a new method of locomotion which we refer to as interactive redirected walking. Here, the participant really walks but, when reaching a boundary, rotates the virtual world so that continuation of walking is always within the physical boundary. We carried out an exploratory study to compare this method with walking in place with respect to presence using questionnaires as well as qualitative responses based on comments written by the participants that were subjected to sentiment analysis. Surprisingly, we found that smaller physical boundaries favor interactive redirected walking, but for boundary lengths more than approximately 7 adult paces, the walking-in-place method is preferable.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":"305 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139255371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When virtual reality supports patients’ emotional management in chemotherapy
Pub Date: 2023-11-14 | DOI: 10.3389/frvir.2023.1294482
Hélène Buche, Aude Michel, Nathalie Blanc
Objectives: Our study follows up on previous research carried out in physiotherapy. The present study aims to evaluate the effectiveness of virtual reality (VR) as a tool to support emotional management during the acute phase of breast cancer treatment (a chemotherapy session). Materials and methods: A quasi-experimental protocol was implemented in an oncology department with 120 patients randomly assigned to one of four conditions. During the first 10 minutes of a chemotherapy session, patients either experienced participatory immersion in a natural environment, experienced contemplative immersion in the same environment, listened to classical music, or received no distraction. The involvement of the patients in the virtual environment and the relevance of the immersive modalities were measured through the evaluation of the sense of presence. Particular attention was given to the evaluation of patients’ anxiety levels and emotional states. Results: VR during chemotherapy reduces anxiety and calms emotional tension. The multi-sensory nature of this emotional regulation support tool was more effective than music in inducing positive emotion, and this benefit was most salient when immersion was offered in an interactive format. Conclusion: The relevance of providing support through VR in oncology is confirmed in this study. This tool can compensate for the fluctuating availability of caregivers by offering patients the possibility of shaping their own relaxing worlds, and could help preserve the patient-caregiver relationship.
{"title":"When virtual reality supports patients’ emotional management in chemotherapy","authors":"Hélène Buche, Aude Michel, Nathalie Blanc","doi":"10.3389/frvir.2023.1294482","DOIUrl":"https://doi.org/10.3389/frvir.2023.1294482","url":null,"abstract":"Objectives: Our study is a follow-up of a previous research study that was carried out in physiotherapy. The present study aims to evaluate the effectiveness of virtual reality (VR) as a tool to support emotional management during the acute phase of breast cancer treatment (chemotherapy session). Materials and methods: A quasi-experimental protocol was implemented in an oncology department with 120 patients randomly assigned to one of four conditions that were being compared. During the first 10 minutes of a chemotherapy session, patients could either be exposed to a participatory immersion in a natural environment; or be placed in a contemplative immersion condition in the same environment; or listen to classical music; or receive no distraction. The involvement of the patients in the virtual environment and the relevance of the immersive modalities were measured through the evaluation of sense of presence. Particular interest was given to the evaluation of anxiety level and the emotional state of the patients. Results: VR during chemotherapy reduces anxiety and calms emotional tension. The multi-sensory nature of this emotional regulation support tool was more effective than music in inducing positive emotion, and this benefit was the most salient when immersion was offered in an interactive format. Conclusion: The relevance of providing support through VR in oncology is confirmed in this study. This tool can compensate for the fluctuating availability of caregivers by offering patients the possibility of shaping their own relaxing worlds and could help preserve the patient-caregiver relationship.","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":"20 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134993598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Who is Alyx? A new behavioral biometric dataset for user identification in XR
Pub Date: 2023-11-10 | DOI: 10.3389/frvir.2023.1272234
Christian Rack, Tamara Fernando, Murat Yalcin, Andreas Hotho, Marc Erich Latoschik
This article presents a new dataset containing motion and physiological data of users playing the game “Half-Life: Alyx”. The dataset specifically targets behavioral and biometric identification of XR users. It includes motion and eye-tracking data captured with an HTC Vive Pro from 71 users playing the game for 45 minutes on each of two separate days. Additionally, we collected physiological data from 31 of these users. We provide benchmark performances for the task of motion-based identification of XR users with two prominent state-of-the-art deep learning architectures (GRU and CNN). After training on the first session of each user, the best model can identify the 71 users in the second session with a mean accuracy of 95% within 2 minutes. The dataset is freely available at https://github.com/cschell/who-is-alyx
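As an illustration of the GRU baseline described above, the following PyTorch sketch classifies fixed-length windows of motion features into one of the 71 identities. The layer sizes, window length, and feature count are illustrative, not the benchmark's actual configuration.

import torch
import torch.nn as nn

class MotionGRU(nn.Module):
    def __init__(self, n_features=21, hidden=128, n_users=71):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_users)

    def forward(self, x):                  # x: (batch, time, n_features)
        _, h = self.gru(x)                 # h: (num_layers, batch, hidden)
        return self.head(h[-1])            # logits over user identities

# Train on session-1 windows, then identify session-2 windows, e.g.:
model = MotionGRU()
windows = torch.randn(8, 300, 21)          # 8 windows of 300 motion frames
logits = model(windows)
predicted_user = logits.argmax(dim=1)      # in practice, vote across windows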
{"title":"Who is Alyx? A new behavioral biometric dataset for user identification in XR","authors":"Rack, Christian, Fernando, Tamara, Yalcin, Murat, Hotho, Andreas, Latoschik, Marc Erich","doi":"10.3389/frvir.2023.1272234","DOIUrl":"https://doi.org/10.3389/frvir.2023.1272234","url":null,"abstract":"This article presents a new dataset containing motion and physiological data of users playing the game \"Half-Life: Alyx\". The dataset specifically targets behavioral and biometric identification of XR users. It includes motion and eye-tracking data captured by a HTC Vive Pro of 71 users playing the game on two separate days for 45 minutes. Additionally, we collected physiological data from 31 of these users. We provide benchmark performances for the task of motion-based identification of XR users with two prominent state-of-the-art deep learning architectures (GRU and CNN). After training on the first session of each user, the best model can identify the 71 users in the second session with a mean accuracy of 95% within 2 minutes. The dataset is freely available under https://github.com/cschell/who-is-alyx","PeriodicalId":73116,"journal":{"name":"Frontiers in virtual reality","volume":"79 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135087587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}