Pub Date: 2023-12-06. DOI: 10.3389/frvir.2023.1291078
Cybersickness as the virtual reality sickness questionnaire (VRSQ) measures it!? – an environment-specific revision of the VRSQ
Judith Josupeit
Background: Virtual Reality (VR) involves not only stereoscopic images but also the possibility of interacting with and participating in a computer-generated environment. However, laboratory studies primarily focus on the first part of this definition. In this context, comparing results from different VR applications with diverging goals becomes difficult. This is especially true in cybersickness research (visually induced motion sickness in VR), where self-report symptom questionnaires are used. The prominent Simulator Sickness Questionnaire (SSQ) has been criticized for its lack of specificity, double factor loadings, outdatedness, and unrepresentative sample. VR-specific revisions such as the Virtual Reality Sickness Questionnaire (VRSQ) address these criticisms but lack generalizability. Methods: The current paper applies a Confirmatory Factor Analysis to the VRSQ with data from three different VR environments and a sample size of N = 244. The environments had different setups, visual complexities, and interaction possibilities, and these characteristics acted as moderators of the VRSQ's factorial structure. Furthermore, to control for VR-unrelated effects, baseline ratings were taken into account. Results: The Confirmatory Factor Analysis indicated a moderate fit for the global model but a misspecification for two of the three environments. Only the environment similar to that of the original VRSQ paper converged with the model. Conclusions: A detailed description of the VR environment is required in scientific method reports. With a view to VR accessibility for physically impaired users in addition to healthy subjects, an added baseline measurement can address discriminant validity. Until generalizable VR-specific revisions of the SSQ are validated, the paper suggests using the Δ-SSQ in aggregated raw format.
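The Δ-SSQ in aggregated raw format suggested in the conclusion amounts to subtracting baseline symptom ratings from post-exposure ratings and summing the unweighted items. The sketch below illustrates that computation; the 16-item/0-3 rating assumption and the function names are illustrative, not taken from the paper.

```python
# Hedged sketch: aggregated raw Δ-SSQ (post-exposure minus baseline, unweighted sum).
# Assumption: 16 SSQ items rated 0-3; exact item wording/scale per the original SSQ.
from typing import Sequence

def raw_ssq_total(ratings: Sequence[int]) -> int:
    """Sum of raw (unweighted) symptom ratings, e.g., 16 items scored 0-3."""
    if any(r < 0 or r > 3 for r in ratings):
        raise ValueError("SSQ items are assumed to be rated on a 0-3 scale")
    return sum(ratings)

def delta_ssq(baseline: Sequence[int], post: Sequence[int]) -> int:
    """Aggregated raw Δ-SSQ: post-exposure total minus baseline total."""
    if len(baseline) != len(post):
        raise ValueError("baseline and post-exposure ratings must cover the same items")
    return raw_ssq_total(post) - raw_ssq_total(baseline)

# Example: mild symptoms at baseline, stronger symptoms after exposure.
baseline = [0] * 14 + [1, 1]              # two items rated "slight" before immersion
post     = [1] * 8 + [0] * 6 + [2, 2]     # several "slight" and two "moderate" ratings after
print(delta_ssq(baseline, post))          # 10
```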
Pub Date: 2023-12-06. DOI: 10.3389/frvir.2023.1243956
Impact of motion cues, color, and luminance on depth perception in optical see-through AR displays
Omeed Ashtiani, Hung-Jui Guo, Balakrishnan Prabhakaran
Introduction: Augmented Reality (AR) systems are systems in which users view and interact with virtual objects overlaying the real world. AR systems are used across a variety of disciplines, e.g., games, medicine, and education. Optical See-Through (OST) AR displays allow users to perceive the real world directly while overlaying it with computer-generated imagery. While depth perception and object visibility are widely studied, we wanted to observe how an object's color, luminance, and movement interact with each other and with external luminance in OST AR devices. Little research has addressed the effects of virtual objects' parameters, external lighting, and object mobility on depth perception. Methods: We analyze the effects of motion cues, color, and luminance on depth estimation of AR objects overlaying the real world on OST displays. We perform two experiments, differing in environmental lighting conditions (287 lux and 156 lux), and analyze the effects and differences on depth and speed perception. Results: We found that while stationary objects follow previous research with regard to depth perception, motion as well as object and environmental luminance influence this perception. Discussion: These results will help developers account for depth estimation issues that may arise in AR environments. Awareness of the different effects of speed and environmental illuminance on depth perception can be leveraged in AR or MR applications where precision matters.
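As an illustration of the kind of analysis such an experiment involves, the sketch below compares signed depth-estimation error between the two lighting conditions. The column names, the example values, and the choice of Welch's t-test are assumptions for illustration, not the authors' actual pipeline.

```python
# Hedged sketch: comparing signed depth-estimation error across two lighting conditions.
# Data and test choice are illustrative; they are not taken from the paper.
import pandas as pd
from scipy import stats

# Hypothetical per-trial data: true and judged distances (metres) plus lighting condition (lux).
df = pd.DataFrame({
    "condition_lux":  [287, 287, 287, 156, 156, 156],
    "true_depth_m":   [2.0, 3.0, 4.0, 2.0, 3.0, 4.0],
    "judged_depth_m": [1.8, 2.7, 3.9, 2.3, 3.4, 4.6],
})
df["signed_error_m"] = df["judged_depth_m"] - df["true_depth_m"]

# Mean signed error per lighting condition (negative = underestimation).
print(df.groupby("condition_lux")["signed_error_m"].mean())

# Welch's t-test between the two conditions (equal variances not assumed).
bright = df.loc[df["condition_lux"] == 287, "signed_error_m"]
dim = df.loc[df["condition_lux"] == 156, "signed_error_m"]
print(stats.ttest_ind(bright, dim, equal_var=False))
```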
Pub Date: 2023-12-05. DOI: 10.3389/frvir.2023.1215820
Virtual monitors vs. physical monitors: an empirical comparison for productivity work
Leonardo Pavanatto, Shakiba Davari, Carmen Badea, Richard Stoakley, Doug A. Bowman
Virtual monitors can display information through a head-worn display when a physical monitor is unavailable or provides insufficient space. Low resolution and a restricted field of view are common issues of these displays; they reduce readability and peripheral vision, leading to increased head movement as display size increases. This work evaluates the performance and user experience of a virtual monitor setup that combines software designed to minimize graphical transformations with a high-resolution virtual reality head-worn display. Participants performed productivity work across three approaches: Workstation, which is often used at office locations and consists of three side-by-side physical monitors; Laptop, which is often used in mobile locations and consists of a single physical monitor expanded with multiple desktops; and Virtual, our prototype with three side-by-side virtual monitors. Results show that participants deemed Virtual faster, easier to use, and more intuitive than Laptop, evidencing the advantages of head and eye glances over full content switches. They also confirm the existence of a gap between Workstation and Virtual, as Workstation achieved the highest user experience. We conclude with design guidelines obtained from the lessons learned in this study.
Pub Date: 2023-12-05. DOI: 10.3389/frvir.2023.1141683
The sense of embodiment in Virtual Reality and its assessment methods
Martin Guy, Jean-Marie Normand, Camille Jeunet-Kelway, Guillaume Moreau
The sense of embodiment refers to the sensations of being inside, having, and controlling a body. In virtual reality, it is possible to substitute a person’s body with a virtual body, referred to as an avatar. Modulations of the sense of embodiment through modifications of this avatar have perceptual and behavioural consequences that can influence the way users interact with the virtual environment. Therefore, it is essential to define metrics that enable a reliable assessment of the sense of embodiment in virtual reality, to better understand its dimensions, the way they interact, and their influence on the quality of interaction in the virtual environment. In this review, we first introduce the current knowledge on the sense of embodiment, its dimensions (senses of agency, body ownership, and self-location), and how they relate to one another. Then, we dive into the different methods currently used to assess the sense of embodiment, ranging from questionnaires to neurophysiological measures. We provide a critical analysis of the existing metrics, discussing their advantages and drawbacks in the context of virtual reality. Notably, we argue that real-time measures of embodiment that are also specific and do not require double tasking are the most relevant in the context of virtual reality. Electroencephalography seems a good candidate for the future if its drawbacks (such as its sensitivity to movement and its limited practicality) are addressed. While the perfect metric, if it exists, has yet to be identified, this work provides clues on which metric to choose depending on the context, which should hopefully contribute to better assessing and understanding the sense of embodiment in virtual reality.
Pub Date: 2023-12-04. DOI: 10.3389/frvir.2023.1210211
RealTHASC—a cyber-physical XR testbed for AI-supported real-time human autonomous systems collaborations
Andre Paradise, S. Surve, Jovan C. Menezes, Madhav Gupta, Vaibhav Bisht, Kyung Rak Jang, Cong Liu, Suming Qiu, Junyi Dong, Jane Shin, Silvia Ferrari
Today’s research on human-robot teaming requires the ability to test artificial intelligence (AI) algorithms for perception and decision-making in complex real-world environments. Field experiments, also referred to as experiments “in the wild,” do not provide the level of detailed ground truth necessary for thorough performance comparisons and validation. Experiments on pre-recorded real-world data sets are also significantly limited in their usefulness because they do not allow researchers to test the effectiveness of active robot perception and control or decision strategies in the loop. Additionally, research on large human-robot teams requires tests and experiments that are too costly even for industry and may result in considerable time losses when experiments go awry. The novel Real-Time Human Autonomous Systems Collaborations (RealTHASC) facility at Cornell University interfaces real and virtual robots and humans with photorealistic simulated environments by implementing new concepts for the seamless integration of wearable sensors, motion capture, physics-based simulations, robot hardware, and virtual reality (VR). The result is an extended reality (XR) testbed by which real robots and humans in the laboratory are able to experience virtual worlds, inclusive of virtual agents, through real-time visual feedback and interaction. VR body tracking by DeepMotion is employed in conjunction with the OptiTrack motion capture system to transfer every human subject and robot in the real physical laboratory space into a synthetic virtual environment. The resulting human/robot avatars not only mimic the behaviors of the real agents but also experience the virtual world through virtual sensors and transmit the sensor data back to the real human/robot agent, all in real time. New cross-domain synthetic environments are created in RealTHASC using Unreal Engine™, bridging the simulation-to-reality gap and allowing for the inclusion of underwater/ground/aerial autonomous vehicles, each equipped with a multi-modal sensor suite. The experimental capabilities offered by RealTHASC are demonstrated through three case studies showcasing mixed real/virtual human/robot interactions in diverse domains, leveraging and complementing the benefits of experimentation in simulation and in the real world.
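The core data flow described above, capturing a real agent's pose to drive a corresponding avatar in the synthetic world while returning virtual sensor data to the real agent, can be summarized as a simple relay loop. The sketch below is a schematic abstraction only; every class and function name is a placeholder, and none of it uses the actual RealTHASC, OptiTrack, DeepMotion, or Unreal Engine APIs.

```python
# Hedged sketch: schematic real-to-virtual pose relay for an XR testbed.
# All names are illustrative placeholders, not the RealTHASC interfaces.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # position in metres
    y: float
    z: float
    roll: float   # orientation in degrees
    pitch: float
    yaw: float

def read_tracked_pose(agent_id: str) -> Pose:
    """Placeholder for a motion-capture read (e.g., one rigid body per tracked agent)."""
    return Pose(0.0, 0.0, 1.7, 0.0, 0.0, 90.0)

def update_avatar(agent_id: str, pose: Pose) -> None:
    """Placeholder for pushing the real pose to the agent's avatar in the simulated world."""
    print(f"avatar[{agent_id}] moved to {pose}")

def render_virtual_sensors(agent_id: str) -> bytes:
    """Placeholder for virtual sensor output (e.g., a rendered camera frame)."""
    return b"frame-bytes"

def relay_step(agent_ids: list[str]) -> dict[str, bytes]:
    """One tick: real poses drive avatars; virtual sensor data is returned to each agent."""
    frames = {}
    for agent_id in agent_ids:
        update_avatar(agent_id, read_tracked_pose(agent_id))
        frames[agent_id] = render_virtual_sensors(agent_id)
    return frames

relay_step(["human_1", "robot_1"])
```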
Pub Date: 2023-11-30. DOI: 10.3389/frvir.2023.1291516
Autism-related traits are related to effectiveness of immersive visual guidance on spatial cognitive ability: a pilot study
Yuki Harada, Makoto Wada
A head-mounted display can restrict users’ visual fields and thereby impair their spatial cognitive ability. Spatial cognition can be assisted with immersive visual guidance; however, whether this technique is useful for individuals with autism-spectrum disorder (ASD) remains unclear. Given the recent virtual reality (VR) content targeting individuals with ASD, the relationship between ASD-related traits and the effectiveness of immersive visual guidance should be clarified. This pilot study evaluated how ASD-related traits (autistic traits and empathizing–systemizing cognitive styles) among typically developing individuals are related to the effectiveness of visual guidance. Participants performed visual search and spatial localization tasks while using immersive visual guidance. In the visual search task, participants searched immersive VR environments for a target object and pushed a button according to the target color as quickly as possible. In the localization task, they viewed immersive visual guidance for a short duration and indicated the guided direction via a controller. Results showed that visual search times decreased with stronger systemizing cognition, whereas ASD-related traits were not significantly related to localization accuracy. These findings suggest that immersive visual guidance is generally useful for individuals with higher ASD-related traits.
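The reported relationship, shorter visual search times with stronger systemizing cognition, corresponds to a simple correlation or regression between questionnaire scores and mean search times. The sketch below illustrates that analysis on hypothetical data; the values and the choice of Pearson correlation are assumptions, not the study's actual pipeline.

```python
# Hedged sketch: relating a systemizing-cognition score to mean visual search time.
# The data and the choice of Pearson correlation are illustrative.
import numpy as np
from scipy import stats

# Hypothetical per-participant values.
systemizing_score = np.array([12, 18, 25, 31, 40, 44, 52, 60])
mean_search_time_s = np.array([4.8, 4.5, 4.6, 4.1, 3.9, 3.7, 3.4, 3.2])

r, p = stats.pearsonr(systemizing_score, mean_search_time_s)
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r means faster search with higher scores

# Equivalent simple linear regression (slope in seconds per score point).
slope, intercept, r_value, p_value, stderr = stats.linregress(systemizing_score, mean_search_time_s)
print(f"slope = {slope:.3f} s per point")
```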
Pub Date: 2023-11-29. DOI: 10.3389/frvir.2023.1261096
Older adults exhibit declines in instrumental activities of daily living during a virtual grocery shopping task
Morgan McGrath Lewis, Colin Waltz, Kathryn Scelina, Logan Scelina, Kelsey Owen, Karissa Hastilow, Mandy Miller Koop, A. Rosenfeldt, Jay L Alberts
Introduction: The successful performance of instrumental activities of daily living (IADLs) is critical in maintaining independence for older adults. Traditional IADL questionnaires and performance-based assessments are time-consuming, potentially unreliable, and fail to adequately consider the interplay between cognitive and motor performance in completing IADLs. The Cleveland Clinic Virtual Reality Shopping (CC-VRS) platform was developed to objectively quantify IADL performance through the characterization of cognitive, motor, and cognitive-motor function. The CC-VRS combines an immersive virtual grocery store with an omnidirectional treadmill to create a scenario in which the user physically navigates through a virtual environment. The primary aim of this project was to determine the known-group validity of the CC-VRS platform to characterize IADL performance in healthy older adults and young adults. Methods: Twenty healthy young (n = 10) and older (n = 10) adults completed the Basic and Complex CC-VRS scenarios. Position data from VR trackers on the hands, waist, and feet were used to quantify motor performance. Cognitive and dual-task performance were automatically calculated by the application during specific shopping sub-tasks. Results: Older adults exhibited significantly worse performance on multiple cognitive, motor, and dual-task outcomes of the CC-VRS (e.g., average walking speed, number of list activations, and stopping frequency). Discussion: The CC-VRS successfully discriminated IADL performance between young and healthy older adults. The complex realistic environment of the CC-VRS, combined with simultaneous evaluation of motor and cognitive performance, has the potential to more accurately characterize IADL performance by identifying subtle functional deficits that may precede neurological disease.
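Dual-task performance of the kind the CC-VRS quantifies is commonly expressed as a dual-task cost, the relative decline from single-task to dual-task performance. The sketch below shows that standard computation; it is the generic formulation, not necessarily the platform's exact metric.

```python
# Hedged sketch: dual-task cost (DTC) as the relative decline from single- to dual-task performance.
# Generic formula; the CC-VRS may compute its outcomes differently.
def dual_task_cost(single_task: float, dual_task: float) -> float:
    """Percent cost; positive values mean worse performance under dual-task conditions.

    For measures where higher is better (e.g., walking speed in m/s):
        DTC = (single - dual) / single * 100
    """
    if single_task == 0:
        raise ValueError("single-task performance must be non-zero")
    return (single_task - dual_task) / single_task * 100.0

# Example: walking speed drops from 1.20 m/s (walking only) to 0.96 m/s (walking while shopping).
print(dual_task_cost(1.20, 0.96))  # 20.0 (% cost)
```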
Pub Date: 2023-11-28. DOI: 10.3389/frvir.2023.1242587
Haptic feedback in a virtual crowd scenario improves the emotional response
R. K. Venkatesan, Domna Banakou, Mel Slater, Manivannan M.
Research has shown that incorporating haptics into virtual environments can increase sensory fidelity and provide powerful and immersive experiences. However, current studies on haptics in virtual interactions primarily focus on one-on-one scenarios, while kinesthetic haptic interactions in large virtual gatherings are underexplored. This study investigates the impact of kinesthetic haptics on eliciting emotional responses within crowded virtual reality (VR) scenarios. Specifically, we examine the influence of the type or quality of haptic feedback on the perception of positive and negative emotions. We designed and developed combinations of tactile and torque feedback devices and evaluated their effects on emotional responses. To do so, we compared three feedback conditions: “No Haptic,” “Tactile Stimulus” delivering tactile cues, and “Haptic Stimulus” delivering tactile and torque cues, each combined with two immersive 360-degree video crowd scenarios, “Casual Crowd” and “Aggressive Crowd.” The results suggest that varying the type or quality of haptic feedback can evoke different emotional responses in crowded VR scenarios. Participants reported increased levels of nervousness with Haptic Stimulus in both virtual scenarios, while both Tactile Stimulus and Haptic Stimulus were negatively associated with pleasantness and comfort during the interaction. Additionally, we observed that participants’ sense of the touch being real was enhanced in Haptic Stimulus compared to Tactile Stimulus. The Haptic Stimulus condition had the most positive influence on participants’ sense of identification with the crowd.
Pub Date: 2023-11-27. DOI: 10.3389/frvir.2023.1307925
Predicting VR cybersickness and its impact on visuomotor performance using head rotations and field (in)dependence
A. Maneuvrier, Ngoc-Doan-Trang Nguyen, Patrice Renaud
Introduction: This exploratory study aims to contribute to the development of the VR framework by focusing on the issue of cybersickness. The main objective is to explore the possibilities of predicting cybersickness using i) field dependence-independence measures and ii) head rotation data through automatic analyses. The second objective is to assess the impact of cybersickness on visuomotor performance. Methods: 40 participants completed a 13.5-min VR immersion in a first-person shooter game. Head rotations were analyzed in both their spatial (coefficients of variation) and temporal dimensions (detrended fluctuation analyses). Exploratory correlation, linear regression, and cluster comparison (unsupervised machine learning) analyses were performed to explain cybersickness and visuomotor performance. Traditional VR human factors (sense of presence, state of flow, video game experience, age) were also integrated. Results: The results suggest that field dependence-independence measured before exposure to VR explains about a quarter of the variance in cybersickness, while the Disorientation scale of the Simulator Sickness Questionnaire predicts 16.3% of the visuomotor performance. In addition, automatic analyses of head rotations during immersion revealed two different clusters of participants, one of them reporting more cybersickness than the other. Discussion: These results are discussed in terms of sensory integration and of a reduction of head rotations as an avoidance behavior against negative symptoms. This study suggests that measuring field dependence-independence using the (Virtual) Rod and Frame Test before immersion and tracking head rotations using internal sensors during immersion might serve as powerful tools for VR practitioners.
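The head-rotation analyses named above, coefficients of variation for the spatial dimension and detrended fluctuation analysis (DFA) for the temporal dimension, can be sketched as follows. The window sizes, the yaw-only example, and the simulated signal are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: coefficient of variation and detrended fluctuation analysis (DFA)
# applied to a head-rotation time series (e.g., a yaw trace in degrees).
import numpy as np

def coefficient_of_variation(x: np.ndarray) -> float:
    """Standard deviation divided by the mean (dimensionless spread measure)."""
    return float(np.std(x) / np.mean(x))

def dfa_alpha(x: np.ndarray, window_sizes=(16, 32, 64, 128, 256)) -> float:
    """DFA scaling exponent alpha from the slope of log F(n) vs. log n.

    Steps: integrate the mean-centred series, split into non-overlapping windows,
    remove a linear trend per window, and measure the RMS fluctuation per window size.
    """
    profile = np.cumsum(x - np.mean(x))
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        rms = []
        for i in range(n_windows):
            segment = profile[i * n:(i + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, segment, 1)           # linear detrending per window
            detrended = segment - np.polyval(coeffs, t)
            rms.append(np.sqrt(np.mean(detrended ** 2)))
        fluctuations.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return float(slope)

# Example with a simulated yaw trace (uncorrelated noise should give alpha near 0.5).
rng = np.random.default_rng(0)
yaw = 90.0 + rng.normal(0.0, 5.0, size=4096)
print(coefficient_of_variation(yaw))
print(dfa_alpha(yaw))
```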
Pub Date: 2023-11-24. DOI: 10.3389/frvir.2023.1260313
Examining the difference between 10- and 20-min of immersive virtual reality on symptoms, affect, and central sensitization in people with chronic back pain
Nancy A. Baker, A. Polhemus, Megan Kenney, Rina Bloch, Nathan Ward, James Intriligator, Robert Edwards
Immersive virtual reality (IVR) is increasingly used as a treatment for chronic pain. In this randomized crossover pilot study, we examined the effect of 10- and 20-min dosages on back pain intensity, affect, and measures of pain sensitization in people with chronic back pain (CBP). Twenty-one people with CBP were seen for two IVR visits. Participants were randomly assigned to receive either 10 or 20 min of IVR in Visit 1 and the other dosage in Visit 2. Our primary analyses were effect sizes and simple inferential comparisons for pain intensity, affect, fatigue, and measures of pain sensitization assessed using quantitative sensory testing. Overall, IVR had a moderate, significant effect in reducing back pain intensity, negative affect, and painful aftersensations. When dosage was examined, 20 min had a moderate, significant effect on pain while 10 min had a small, non-significant effect, although the between-dosage difference was non-significant. Interestingly, effects were much larger in Visit 1, particularly for 20 min, but diminished in Visit 2, where both dosages had smaller effects. We interpret these results to indicate that pain modulation may be associated with novelty and engagement, which can attenuate over time if the IVR encounter is not sufficiently engaging. Moreover, if participants are engaged in a single session, 20 min may be necessary to obtain sufficient competency with IVR, while in subsequent sessions, 10 min of IVR may be sufficient to affect pain.
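The effect sizes reported for this within-subject crossover design correspond to paired comparisons of pre- vs. post-session ratings; a common choice is Cohen's d for paired samples (dz). The sketch below illustrates that computation on hypothetical pain ratings; it is not the authors' actual analysis script.

```python
# Hedged sketch: paired-samples effect size (Cohen's dz) for pre- vs. post-IVR pain ratings.
# The example ratings are hypothetical; the study's exact analysis may differ.
import numpy as np
from scipy import stats

def cohens_dz(pre: np.ndarray, post: np.ndarray) -> float:
    """Mean of the paired differences divided by the SD of those differences."""
    diff = pre - post  # positive values indicate pain reduction
    return float(np.mean(diff) / np.std(diff, ddof=1))

# Hypothetical 0-10 pain intensity ratings before and after a 20-min session.
pre = np.array([6.0, 5.0, 7.0, 4.0, 6.5, 5.5, 7.5, 4.5])
post = np.array([4.5, 4.0, 5.5, 4.0, 5.0, 4.5, 6.0, 4.0])

print(cohens_dz(pre, post))        # standardized within-subject effect
print(stats.ttest_rel(pre, post))  # matching paired t-test
```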