Reaching further in VR: a comparative study with a novel velocity-based technique.
Filip Škola, Fotis Liarokapis
Pub Date: 2026-01-01 | Epub Date: 2025-11-27 | DOI: 10.1007/s10055-025-01189-y
Virtual Reality 30(1), article 1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12660419/pdf/
Out-of-reach interaction in virtual reality has primarily relied on raycasting (selection using the laser-pointer metaphor). However, as bare-hand tracking becomes increasingly prevalent, there is a growing need to explore and optimize hand-based out-of-reach interaction techniques. To address this, we introduce Hand Gliding and Laser Gliding, novel out-of-reach interaction techniques that use velocity-to-velocity mapping to control virtual hands through physical movements, and we implement Go-Go and HOMER, two established position-to-position methods, for comparison. First, a pilot study evaluated the feasibility of Hand Gliding. Next, we conducted a within-subject comparison of the four interaction techniques on selection and translation tasks, assessing speed, comfort, and subjective responses. The two raycasting-aided techniques (HOMER and Laser Gliding) achieved the best results in both performance and user comfort. Position-to-position mapping performed slightly better in tasks requiring rapid selection, while velocity-to-velocity techniques facilitated interaction at greater distances. This study confirms the feasibility of velocity-to-velocity approaches to out-of-reach interaction. Because they are simpler to implement than position-based techniques (they do not require torso-tracking data), velocity-based interaction methods have the potential for wide adoption in current VR systems.
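The two mapping families contrasted here are easy to sketch in code. Below, a minimal Python sketch pairs the classic Go-Go position-to-position mapping with a hypothetical velocity-to-velocity gliding step in the spirit of Hand Gliding; the paper's actual transfer function is not given in the abstract, so the thresholds and gains are illustrative.

```python
import numpy as np

def gogo_virtual_distance(r_real, d=0.5, k=6.0):
    """Classic Go-Go position-to-position mapping (Poupyrev et al., 1996).

    r_real: physical hand distance from the torso (m). Within threshold d
    the mapping is 1:1; beyond it, the virtual arm extends quadratically.
    Measuring r_real requires torso tracking.
    """
    return r_real if r_real < d else r_real + k * (r_real - d) ** 2

def glide_step(virtual_pos, hand_vel, dt, v0=0.2, gain=8.0):
    """Hypothetical velocity-to-velocity step in the spirit of Hand Gliding.

    Slow hand movements (speed <= v0 m/s) map 1:1; faster movements are
    amplified so the virtual hand glides beyond arm's reach. v0 and gain
    are illustrative, not the paper's values.
    """
    speed = np.linalg.norm(hand_vel)
    g = 1.0 if speed <= v0 else 1.0 + gain * (speed - v0) / v0
    return virtual_pos + g * hand_vel * dt
```

The contrast explains the abstract's last point: the Go-Go gain depends on where the hand is relative to the torso, while the gliding gain depends only on how fast the hand moves, so velocity-based methods need no torso-tracking data.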
{"title":"Reaching further in VR: a comparative study with a novel velocity-based technique.","authors":"Filip Škola, Fotis Liarokapis","doi":"10.1007/s10055-025-01189-y","DOIUrl":"10.1007/s10055-025-01189-y","url":null,"abstract":"<p><p>Out-of-reach interaction in virtual reality has primarily relied on raycasting (selection using the laser pointer metaphor). However, as bare-hand tracking becomes increasingly prevalent, there is a growing need to explore and optimize hand-based out-of-reach interaction techniques. To address this, we introduce Hand Gliding and Laser Gliding, novel out-of-reach interaction techniques that use velocity-to-velocity mapping to control virtual hands through physical movements, and implement Go-Go and HOMER, position-to-position methods. First, a pilot study evaluated the feasibility of Hand Gliding. Next, we conducted a within-subject comparison of the four interaction techniques using selection and translation tasks while assessing speed, comfort, and subjective responses. The best results were achieved with both raycasting-aided techniques (HOMER, Laser Gliding) in terms of both performance and user comfort. Position-to-position mapping performed slightly better in tasks requiring rapid selection, while velocity-to-velocity techniques facilitated interaction at greater distances. The feasibility of velocity-to-velocity approaches to out-of-reach interaction was confirmed by this study. Due to their simple implementation (compared to position-based techniques, they do not require torso tracking data), velocity-based interaction methods have the potential for wide adoption in current VR systems.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"30 1","pages":"1"},"PeriodicalIF":5.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12660419/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145649495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effect of virtual reality modality, level of immersion, and locomotion on spatial learning and gaze measures.
Michal Gabay, Tom Schonberg
Pub Date: 2026-01-01 | Epub Date: 2025-12-31 | DOI: 10.1007/s10055-025-01297-9
Virtual Reality 30(1), article 36. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12819571/pdf/
Head-mounted display (HMD) virtual reality (VR) systems have been widely adopted across various fields, including spatial learning research. This study investigated the effects of VR modality (level of immersion), locomotion interface, and proprioception on spatial learning and physiological measures using eye tracking (ET) in VR. We translated the classic T-maze task from Barnes et al. (Can J Psychol 34:29-39, 1980. https://doi.org/10.1037/h0081022) to humans for the first time, comparing three VR modalities: 3D HMD VR with physical walking, 3D HMD VR with controller-based movement, and 2D desktop VR. Eighty-eight participants were recruited, and recruitment was stopped once 75 valid participants (25 per experimental condition) were obtained in the powered sample, as preregistered. Generalized linear mixed models with a binomial distribution, used to assess learning outcomes, revealed that human participants employed a mixture of cue, place, and response strategies when navigating the virtual T-maze, mirroring rodent behavior. In both samples, linear mixed models showed no significant differences between the two HMD VR conditions in learning performance, nor consistent differences in strategy choices. However, 2D desktop navigation was associated with slower initial learning, though this discrepancy diminished in subsequent sessions. These results were supported by reports of spatial presence, immersion, and naturalness. Gaze measures showed that participants who physically walked devoted more visual attention to environmental cues than controller users did. Predictive models for identifying spatial learning strategies from ET and behavioral measures achieved significant accuracy in permutation tests for some models, particularly in the VR walking condition and the second session. Our findings enhance the understanding of spatial learning strategies and the effects of VR modality on cognition and gaze behavior. This work demonstrates the potential of integrated ET data and holds implications for early detection and personalized rehabilitation of neurodegenerative conditions related to spatial cognition.
Supplementary information: The online version contains supplementary material available at 10.1007/s10055-025-01297-9.
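As a pointer for readers unfamiliar with the analysis family named above, here is a minimal Python sketch of a binomial mixed model on trial-level choice accuracy; the authors' actual software, model terms, and column names (correct, modality, session, subject) are not specified in the abstract and are assumed for illustration.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# One row per T-maze trial; columns are hypothetical stand-ins:
# 'correct' (0/1 arm choice), 'modality' (walking/controller/desktop),
# 'session' (1 or 2), 'subject' (participant id).
trials = pd.read_csv("tmaze_trials.csv")

model = BinomialBayesMixedGLM.from_formula(
    "correct ~ C(modality) * session",          # fixed effects
    vc_formulas={"subject": "0 + C(subject)"},  # random intercept per subject
    data=trials,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```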
{"title":"The effect of virtual reality modality level of immersion and locomotion on Spatial learning and gaze measures.","authors":"Michal Gabay, Tom Schonberg","doi":"10.1007/s10055-025-01297-9","DOIUrl":"10.1007/s10055-025-01297-9","url":null,"abstract":"<p><p>The widespread adoption of head-mounted display (HMD) virtual reality (VR) systems has emerged in various fields, including spatial learning research. This study investigated the effects of VR modality level of immersion, locomotion interface, and proprioception on spatial learning and physiological measures using eye-tracking (ET) in VR. We translated the classic T-maze task from Barnes et al. (Can J Psychol 34:29-39, 1980. https://doi.org/10.1037/h0081022) to humans for the first time, comparing three VR modalities: 3D HMD VR with physical walking, 3D HMD VR with controller-based movement, and 2D desktop VR. Eighty-eight participants were recruited, and recruitment was stopped once 75 valid participants were obtained (25 per experimental condition) in the powered sample, as preregistered. Results of generalized mixed linear models with binomial distribution to assess learning outcomes revealed that human participants employed a mixture of cue, place, and response strategies when navigating the virtual T-maze, mirroring rodent behavior. In both samples, mixed linear models showed no significant differences between the two HMD VR conditions in learning performance, nor consistent ones in strategy choices. However, 2D desktop navigation was associated with slower initial learning, though this discrepancy diminished in subsequent sessions. These results were supported by spatial presence, immersion, and naturalness reports. Gaze measures showed that participants who physically walked devoted more visual attention to environmental cues compared to controller users. Predictive models for identifying spatial learning strategies based on ET and behavioral measures demonstrated significant accuracy in permutation tests in some models, particularly in the VR walking condition and second session. Our findings enhance the understanding of spatial learning strategies and the effects of VR modality on cognition and gaze behavior. This work demonstrates the potential of integrated ET data and holds implications for early detection and personalized rehabilitation of neurodegenerative conditions related to spatial cognition.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s10055-025-01297-9.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"30 1","pages":"36"},"PeriodicalIF":5.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12819571/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146031062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards user-centered interactive medical image segmentation in VR with an assistive AI agent.
Pascal Spiegler, Arash Harirpoush, Yiming Xiao
Pub Date: 2026-01-01 | Epub Date: 2025-12-25 | DOI: 10.1007/s10055-025-01284-0
Virtual Reality 30(1), article 20. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12775086/pdf/
Manual segmentation of volumetric medical scans (e.g., MRI, CT) is crucial for disease analysis and surgical planning, yet it is laborious, error-prone, and challenging to master, while fully automatic algorithms can benefit from user feedback. Therefore, combining the complementary power of the latest radiological AI foundation models and the intuitive data interaction of virtual reality (VR), we propose SAMIRA, a novel conversational AI agent for medical VR that assists users with localizing, segmenting, and visualizing 3D medical concepts. Through speech-based interaction, the agent helps users understand radiological features, locate clinical targets, and generate segmentation masks that can be refined with just a few point prompts. The system also supports true-to-scale 3D visualization of segmented pathology to enhance patient-specific anatomical understanding. Furthermore, to determine the optimal interaction paradigm under near-far attention switching when refining segmentation masks in an immersive, human-in-the-loop workflow, we compare VR controller pointing, head pointing, and eye tracking as input modes. A user study demonstrated a high usability score (SUS = 90.0 ± 9.0), low overall task load, and strong support for the proposed VR system's guidance, training potential, and integration of AI into radiological segmentation tasks.
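The point-prompt refinement the abstract describes follows the interaction pattern of SAM-family segmentation models. A minimal 2D sketch using the open-source segment-anything API is below; SAMIRA's actual backbone, checkpoint, and handling of volumetric data are not specified in the abstract, so those details are assumptions.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Model size and checkpoint path are assumptions, not SAMIRA's actual setup.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# One scan slice rendered to RGB (H x W x 3, uint8); stand-in data here.
slice_rgb = np.zeros((512, 512, 3), dtype=np.uint8)
predictor.set_image(slice_rgb)

# A single foreground point prompt, e.g. where the user pointed in VR.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 200]]),  # (x, y) pixel coordinates
    point_labels=np.array([1]),           # 1 = foreground, 0 = background
    multimask_output=False,
)
```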
{"title":"Towards user-centered interactive medical image segmentation in VR with an assistive AI agent.","authors":"Pascal Spiegler, Arash Harirpoush, Yiming Xiao","doi":"10.1007/s10055-025-01284-0","DOIUrl":"10.1007/s10055-025-01284-0","url":null,"abstract":"<p><p>Crucial in disease analysis and surgical planning, manual segmentation of volumetric medical scans (e.g. MRI, CT) is laborious, error-prone, and challenging to master, while fully automatic algorithms can benefit from user feedback. Therefore, with the complementary power of the latest radiological AI foundation models and virtual reality (VR)'s intuitive data interaction, we propose SAMIRA, a novel conversational AI agent for medical VR that assists users with localizing, segmenting, and visualizing 3D medical concepts. Through speech-based interaction, the agent helps users understand radiological features, locate clinical targets, and generate segmentation masks that can be refined with just a few point prompts. The system also supports true-to-scale 3D visualization of segmented pathology to enhance patient-specific anatomical understanding. Furthermore, to determine the optimal interaction paradigm under near-far attention-switching for refining segmentation masks in an immersive, human-in-the-loop workflow, we compare VR controller pointing, head pointing, and eye tracking as input modes. With a user study, evaluations demonstrated a high usability score (SUS = 90.0 ± 9.0), low overall task load, as well as strong support for the proposed VR system's guidance, training potential, and integration of AI in radiological segmentation tasks.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"30 1","pages":"20"},"PeriodicalIF":5.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12775086/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145935348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
XR body illusion for managing pain in fibromyalgia: examining optimal duration.
Jennifer Todd, Kirk Woolford, Lee Cheng, Michael C Lee, Elsje de Villiers, Deanna Finn, Jane E Aspell
Pub Date: 2026-01-01 | Epub Date: 2025-12-27 | DOI: 10.1007/s10055-025-01241-x
Virtual Reality 30(1), article 54. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12886212/pdf/
Background: Fibromyalgia is a chronic condition characterised by widespread pain, as well as sleep disturbances, fatigue, and memory and concentration difficulties. Research suggests that an alteration in how the brain represents multisensory inputs from the body may cause or maintain chronic pain conditions, including fibromyalgia. Extended reality (XR) and virtual reality setups generating multisensory conflicts have been shown to alleviate pain; however, the optimal duration for such interventions remains unexplored. Here, we aimed to determine an optimal duration for the cardio-visual full-body illusion (FBI) in fibromyalgia, considering both tolerability and changes in pain.
Methods: Participants wore headsets to view a video of their own body, filmed from behind, and their virtual body flashed in synchrony with their heartbeat. We used an established dose-finding protocol to determine the ideal duration (balancing benefit and tolerability). Seven cohorts of participants (N = 20) were exposed to different durations of the FBI, with adjustments to duration made according to predefined criteria. Measures included a numeric rating scale for pain intensity, pressure pain thresholds, and scales measuring fibromyalgia symptom severity and impact.
Results: We found a quadratic relationship between session duration and changes in self-reported pain intensity, with durations of 8-16 min yielding the largest improvements. Notably, in the 12-min cohorts pain relief was sustained at the 24-h follow-up, making 12 min the recommended duration for future research.
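The dose-finding logic behind that recommendation can be illustrated with a quadratic fit whose vertex estimates the optimal duration. The numbers below are invented placeholders, not the study's data; only the method is illustrated.

```python
import numpy as np

# Placeholder (duration in min, mean change in pain rating) pairs --
# NOT the study's results, just the shape of a quadratic dose-response.
durations = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 16.0, 20.0])
pain_delta = np.array([-0.4, -0.9, -1.6, -1.9, -2.2, -1.8, -1.0])

a, b, c = np.polyfit(durations, pain_delta, deg=2)  # fit a*x^2 + b*x + c
optimum = -b / (2 * a)  # vertex = duration with the largest pain reduction
print(f"estimated optimal duration: {optimum:.1f} min")
```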
Conclusions: These findings represent a key step towards developing an effective non-pharmacological intervention for fibromyalgia. Future dose-optimisation research should explore the optimum number of sessions and spacing between sessions.
{"title":"XR body illusion for managing pain in fibromyalgia: examining optimal duration.","authors":"Jennifer Todd, Kirk Woolford, Lee Cheng, Michael C Lee, Elsje de Villiers, Deanna Finn, Jane E Aspell","doi":"10.1007/s10055-025-01241-x","DOIUrl":"https://doi.org/10.1007/s10055-025-01241-x","url":null,"abstract":"<p><strong>Background: </strong>Fibromyalgia is a chronic condition characterised by widespread pain, as well as sleep disturbances, fatigue, and memory and concentration difficulties. Research suggests that an alteration in how the brain represents multisensory inputs from the body may cause or maintain chronic pain conditions, including fibromyalgia. Extended reality (XR) and virtual reality setups generating multisensory conflicts have been shown to alleviate pain, however, the optimal duration for such interventions remains unexplored. Here, we aimed to determine an optimal duration for the cardio-visual full body illusion (FBI) in fibromyalgia, considering both tolerability and changes in pain.</p><p><strong>Methods: </strong>Participants wore headsets to view a video of their own body, filmed from behind, and their virtual body flashed in synchrony with their heartbeat. We used an established dose-finding protocol to determine the ideal duration (balancing benefit and tolerability). Seven cohorts of participants (<i>N</i> = 20) were exposed to different durations of the FBI, with adjustments to duration made according to predefined criteria. Measures included a numeric rating scale for pain intensity, pressure pain thresholds, and scales measuring fibromyalgia symptom severity and impact.</p><p><strong>Results: </strong>We found a quadratic relationship between session duration and changes in self-reported pain-intensity, with 8-16-min durations yielding the most significant improvements. Notably, in the 12-min cohorts pain relief was sustained at 24-h follow-up, and this is the recommended duration for future research.</p><p><strong>Conclusions: </strong>These findings represent a key step towards developing an effective non-pharmacological intervention for fibromyalgia. Future dose-optimisation research should explore the optimum number of sessions and spacing between sessions.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"30 1","pages":"54"},"PeriodicalIF":5.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12886212/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146166971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stressful urban walks: an experimental design for measuring physiological and psychological stress in virtual urban environments.
Reza Aghanejad, Amit Birenboim, Mario Matthys, Sebastien Claramunt, Camille Perchoux
Pub Date: 2026-01-01 | Epub Date: 2026-01-17 | DOI: 10.1007/s10055-025-01300-3
Virtual Reality 30(1), article 45. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12855307/pdf/
Introduction: Stress is a growing public health concern, with urban environments strongly influencing stress levels. Experimental approaches simulating human motion (i.e., walking) in virtual urban environments may constitute a promising avenue to assess environmental effects on stress. This study aims to systematically examine the ability to induce and measure psychological and physiological stress responses during walking in immersive virtual urban environments. It includes two sub-experiments: one comparing average stress levels between an urban park and a street, and one assessing momentary stress responses to a siren stimulus.
Methods: Fifty adults residing in Luxembourg experienced virtual walks through a park and a street, in a randomized crossover design, as well as a walk through the street with a siren stimulus. Physiological responses (electrodermal activity (EDA), heart rate (HR), and pupil diameter) were recorded continuously, while self-reported stress and emotional perceptions were assessed after each session via in-VR questionnaires.
Results: Participants reported significantly lower self-reported stress and more positive emotional engagement in the park compared to the street. The standard deviation of HR was higher in the street, possibly indicating higher stress levels, while average pupil diameter was larger in the park, reflecting heightened emotional arousal. Other biomarkers showed no significant differences. Pupil diameter and EDA effectively detected momentary stress after the siren stimulus.
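Momentary responses like the post-siren EDA and pupil effects are typically quantified as a baseline-corrected change in a window after stimulus onset. A minimal sketch of that computation is below; the window lengths are illustrative, not the paper's.

```python
import numpy as np

def event_response(signal, fs, onset_s, base_s=5.0, resp_s=10.0):
    """Baseline-corrected response to a discrete stimulus (e.g., the siren).

    signal: 1-D physiological trace (EDA or pupil diameter); fs: sampling
    rate in Hz; onset_s: stimulus onset in seconds. Returns the mean level
    in the post-stimulus window minus the pre-stimulus baseline.
    """
    i = int(onset_s * fs)
    baseline = signal[i - int(base_s * fs):i].mean()
    response = signal[i:i + int(resp_s * fs)].mean()
    return response - baseline

# e.g., EDA sampled at 4 Hz with the siren 120 s into the street walk:
# delta = event_response(eda, fs=4, onset_s=120.0)
```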
Conclusions: Combining virtual environments with a walking simulator and biosensors provides a novel, effective method to study environmental impacts on stress. Findings contribute to identifying stress-inducing urban environments and developing stress reduction interventions.
Supplementary information: The online version contains supplementary material available at 10.1007/s10055-025-01300-3.
{"title":"Stressful urban walks: an experimental design for measuring physiological and psychological stress in virtual urban environments.","authors":"Reza Aghanejad, Amit Birenboim, Mario Matthys, Sebastien Claramunt, Camille Perchoux","doi":"10.1007/s10055-025-01300-3","DOIUrl":"10.1007/s10055-025-01300-3","url":null,"abstract":"<p><strong>Introduction: </strong>Stress is a growing public health concern, with urban environments strongly influencing stress levels. Experimental approaches simulating human motion (i.e., walking) in virtual urban environments may constitute a promising avenue to assess environmental effects on stress. This study aims to systematically examine the ability to induce and measure psychological and physiological stress responses during walking in immersive virtual urban environments. It includes two sub-experiments: one comparing average stress levels between an urban park and a street, and one assessing momentary stress responses to a siren stimulus.</p><p><strong>Methods: </strong>Fifty adults residing in Luxembourg experienced virtual walks through a park and a street, in a randomized crossover design, as well as a walk through the street with a siren stimulus. Physiological responses (electrodermal activity (EDA), heart rate (HR), and pupil diameter) were recorded continuously, while self-reported stress and emotional perceptions were assessed after each session via in-VR questionnaires.</p><p><strong>Results: </strong>Participants reported significantly lower self-reported stress and more positive emotional engagement in the park compared to the street. The standard deviation of HR was higher in the street, possibly indicating higher stress levels, while average pupil diameter was larger in the park, reflecting heightened emotional arousal. Other biomarkers showed no significant differences. Pupil diameter and EDA effectively detected momentary stress after the siren stimulus.</p><p><strong>Conclusions: </strong>Combining virtual environments with a walking simulator and biosensors provides a novel, effective method to study environmental impacts on stress. Findings contribute to identifying stress-inducing urban environments and developing stress reduction interventions.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s10055-025-01300-3.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"30 1","pages":"45"},"PeriodicalIF":5.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12855307/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146107503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixed reality assistive technology to support independence at home of older adults with neurocognitive disorders: development and evaluation of utility and usability.
Guillaume Spalla, Charles Gouin-Vallerand, Nathalie Bier
Pub Date: 2026-01-01 | Epub Date: 2026-01-07 | DOI: 10.1007/s10055-025-01164-7
Virtual Reality 30(1), article 23. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12779662/pdf/
Older adults may develop neurocognitive disorders. One of the first cognitive domains to be affected is executive functions, defined as the capacity to plan and carry out complex goal-directed behavior. Impairment in executive functions can affect an individual's ability to continue living at home independently. Assistive technology may alleviate these impairments, and mixed reality, which blends virtual elements into one's perception of the real world, offers theoretical advantages in this context. This work aimed to develop and evaluate the utility, usability, and cognitive load of MATCH, a mixed reality assistive technology for cognition. MATCH was developed to support executive function impairments while following the recommendations of human-computer interaction in mixed reality and of assistive technology for cognition. An evaluation was carried out with 12 older adults with and without neurocognitive disorders. A quantitative analysis was conducted using performance and self-reported metrics, and a deductive thematic content analysis was performed on participants' spontaneous comments. MATCH could autonomously guide participants, providing necessary and sufficient assistance with only a low cognitive load, and no negative impacts were observed. Themes emerged related to the positive and negative aspects of utility, usability, and social significance. Several potential avenues for further research were identified, including exploring alternative methods of providing assistance based on the user's previous interactions with the system.
{"title":"Mixed reality assistive technology to support independence at home of older adults with neurocognitive disorders: development and evaluation of utility and usability.","authors":"Guillaume Spalla, Charles Gouin-Vallerand, Nathalie Bier","doi":"10.1007/s10055-025-01164-7","DOIUrl":"10.1007/s10055-025-01164-7","url":null,"abstract":"<p><p>Older adults may be subjected to neurocognitive disorders. One of the first cognitive domains to be affected is executive functions. Executive functions are defined as the capacity to plan and carry out complex goal-directed behavior. Impairment in executive functions can have an impact on the ability of the individual to continue to live at home independently. Assistive technology may alleviate these impairments. Mixed reality can offer theoretical advantages in this context. Mixed reality blends virtual elements into one's perception of the real world. Develop and evaluate the utility, usability and cognitive load of MATCH, a mixed reality assistive technology for cognition. MATCH has been developed to support executive function impairments while following the recommendations of human-computer interaction in mixed reality and assistive technology for cognition. An evaluation was carried out with 12 older adults without and with neurocognitive disorders. A quantitative analysis was conducted using performance and self-reported metrics. A deductive thematic content qualitative analysis was done based on spontaneous comments made by participants. MATCH could autonomously guide participants, providing necessary and sufficient assistance with only a low cognitive load. It did not have any negative impact. Themes emerged related to the positive and negative aspects of utility, usability and social significance. A number of potential avenues for further research were identified, including the possibility of exploring alternative methods of providing assistance based on the user's previous interactions with the system.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"30 1","pages":"23"},"PeriodicalIF":5.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12779662/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145953102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction-based attention computing: a proof of concept study.
T Arthur, D Borg, Y Wang, D Harris, S Vine, G Buckingham, M Wilson, M Brosnan
Pub Date: 2026-01-01 | Epub Date: 2026-01-19 | DOI: 10.1007/s10055-025-01307-w
Virtual Reality 30(1), article 44. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12855329/pdf/
Recent advancements in extended reality (XR) and data modelling present new opportunities for adaptive simulation solutions that can measure and respond to individual neuropsychological states. However, questions remain about the optimal metrics for real-time data capture and the applicability of these solutions for enhancing user experiences. The present research examined a novel form of adaptive XR, called "prediction-based attention computing" (PbAC), which tailors simulations based on computational models of the brain and, thus, on the dynamic sensorimotor processes theorised to underpin human perception and learning. Specifically, this study aimed to demonstrate whether PbAC can adaptively capture users' internal state predictions and modulate associated neuropsychological responses. To test this, we used an XR-based racquetball paradigm in which participants were tasked with intercepting virtual balls that emerged from different starting locations. In the PbAC conditions, in-situ eye-tracking assessments were used to index participants' prior beliefs and to manipulate the level of expectedness (i.e., prediction error) on each trial. Various measures of predictive sensorimotor behaviour were then extracted and compared with data from probability-controlled and matched-order control conditions. Results showed that sensorimotor responses were affected by the expectedness of XR stimuli and that clear prediction-related biases emerged within the PbAC conditions. The novel computing software also provoked marked surprisal responses on trials designed to elicit high levels of prediction error, and these surprisal effects were similar to, or even greater than, those in our comparison conditions. Together, the findings provide proof of concept for PbAC and support its development in future research and technology innovations.
Supplementary information: The online version contains supplementary material available at 10.1007/s10055-025-01307-w.
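The core PbAC loop (index the user's prediction from gaze, then pick the next stimulus to raise or lower prediction error) can be sketched in a few lines. This is a schematic reconstruction under stated assumptions: the paper's actual gaze index and selection rule are not given in the abstract, and dwell time is used here only as a plausible proxy.

```python
import numpy as np

def update_beliefs(beliefs, gaze_dwell, lr=0.3):
    """Update a belief distribution over candidate ball origins from gaze.

    gaze_dwell: normalized pre-trial dwell time on each origin, used as a
    hypothetical proxy for the participant's prediction.
    """
    beliefs = (1 - lr) * beliefs + lr * gaze_dwell
    return beliefs / beliefs.sum()

def pick_origin(beliefs, surprise=True):
    """Choose the next origin to maximize or minimize prediction error."""
    surprisal = -np.log(beliefs)  # Shannon surprisal per candidate origin
    return int(np.argmax(surprisal) if surprise else np.argmin(surprisal))

beliefs = np.full(4, 0.25)  # uniform prior over four origins
beliefs = update_beliefs(beliefs, np.array([0.6, 0.2, 0.1, 0.1]))
next_origin = pick_origin(beliefs, surprise=True)  # high-prediction-error trial
```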
{"title":"Prediction-based attention computing: a proof of concept study.","authors":"T Arthur, D Borg, Y Wang, D Harris, S Vine, G Buckingham, M Wilson, M Brosnan","doi":"10.1007/s10055-025-01307-w","DOIUrl":"10.1007/s10055-025-01307-w","url":null,"abstract":"<p><p>Recent advancements in extended reality (XR) and data modelling present new opportunities for adaptive simulation solutions, which can measure and respond to individual neuropsychological states. However, questions remain about the optimal metrics for real-time data capture and the applicability of these solutions for enhancing user experiences. The present research examined a novel form of adaptive XR, called \"prediction-based attention computing\" (PbAC), which tailors simulations based on computational models of the brain and, thus, the dynamic sensorimotor processes theorised to underpin human perception and learning. Specifically, this study aimed to demonstrate whether PbAC can adaptively capture users' internal state predictions and modulate associated neuropsychological responses. To test this, we used an XR-based racquetball paradigm, in which participants were tasked with intercepting virtual balls that emerged from different starting locations. For PbAC conditions, in-situ eye tracking data assessments were utilised to index participant's prior beliefs and manipulate levels of expectedness (i.e., prediction error) on each trial. Various measures of predictive sensorimotor behaviour were then extracted and compared with data from probability-controlled and matched-order control conditions. Results showed that sensorimotor responses were affected by the expectedness of XR stimuli, and that clear, prediction-related biases emerged within PbAC conditions. The novel computing software also provoked marked surprisal responses on trials designed to elicit high levels of prediction error, and these surprisal effects were similar, or even greater than, those in our comparison conditions. Together, the findings provide proof of concept for PbAC and support its development within future research and technology innovations.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s10055-025-01307-w.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"30 1","pages":"44"},"PeriodicalIF":5.0,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12855329/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146107489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From "Skype on wheels" to embodied telepresence: a holistic approach to improving the user experience of telepresence robots.
Ivan A Aguilar, Markku Suomalainen, Steven M LaValle, Timo Ojala, Bernhard E Riecke
Pub Date: 2025-01-01 | Epub Date: 2025-09-25 | DOI: 10.1007/s10055-025-01222-0
Virtual Reality 29(4), article 161. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12464061/pdf/
Telepresence robots offer the promise of remote presence, but user experience, usability, and performance challenges hinder widespread adoption. This study introduces a novel, low-cost user interface for telepresence robots that integrates insights from virtual reality (VR) and robotics to address these limitations. The setup was designed holistically, considering several factors: an inclined rotating chair for embodied rotation, a joystick for precise translation, dual displays for enhanced spatial awareness, and an immersive environment with controlled lighting and audio. A user study (N = 42) with a simulated robot in a virtual environment compared this novel setup with a standard setup that mimicked the typical user interface of commercial telepresence robots. Results showed that the novel setup significantly improved the user experience, particularly increasing presence, enjoyment, and engagement. It also improved task performance over time, reducing obstacle collisions and distance traveled. These findings highlight the potential of incorporating insights from VR and robotics to design more effective and user-friendly interfaces for telepresence robots, paving the way for increased adoption.
Supplementary information: The online version contains supplementary material available at 10.1007/s10055-025-01222-0.
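The division of labor in the interface (embodied rotation via the chair, manual translation via the joystick) amounts to a simple velocity mapping. A minimal sketch is below; the gains, limits, and sensor details are assumptions, as the abstract does not report them.

```python
def drive_command(chair_yaw_rate, joystick_y, v_max=1.0, w_gain=1.0):
    """Map the embodied interface to a (linear, angular) velocity command.

    chair_yaw_rate: rad/s from the rotating chair (rotation is embodied);
    joystick_y: forward deflection in [-1, 1] (translation is manual).
    v_max and w_gain are illustrative tuning parameters.
    """
    linear = max(-1.0, min(1.0, joystick_y)) * v_max  # m/s, clamped
    angular = w_gain * chair_yaw_rate                 # rad/s
    return linear, angular
```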
{"title":"From \"skype on wheels\" to embodied telepresence: a holistic approach to improving the user experience of telepresence robots.","authors":"Ivan A Aguilar, Markku Suomalainen, Steven M LaValle, Timo Ojala, Bernhard E Riecke","doi":"10.1007/s10055-025-01222-0","DOIUrl":"10.1007/s10055-025-01222-0","url":null,"abstract":"<p><p>Telepresence robots offer the promise of remote presence, but user experience, usability, and performance challenges hinder widespread adoption. This study introduces a novel and low-cost user interface for telepresence robots that integrates insights from virtual reality (VR) and robotics to address these limitations. The novel setup was designed holistically, considering several different factors: an inclined rotating chair for embodied rotation, a joystick for precise translation, dual displays for enhanced spatial awareness, and an immersive setup with controlled lighting and audio. A user study (N = 42) with a simulated robot in a virtual environment compared this novel setup with a standard setup, that mimicked the typical user interface of commercial telepresence robots. Results showed that this novel setup significantly improved the user experience, particularly increasing presence, enjoyment, and engagement. This novel setup also improved task performance over time, reducing obstacle collisions and distance traveled. These findings highlight the potential for combining and incorporating insights from VR and robotics to design more effective and user-friendly interfaces for telepresence robots, paving the way for increased adoption.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s10055-025-01222-0.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"29 4","pages":"161"},"PeriodicalIF":5.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12464061/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145186822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Obstacle avoidance of physical, stereoscopic, and pictorial objects.
Martin Giesel, Daniela Ruseva, Constanze Hesse
Pub Date: 2025-01-01 | Epub Date: 2025-03-01 | DOI: 10.1007/s10055-025-01119-y
Virtual Reality 29(1), article 45. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11872779/pdf/
Simulated environments, e.g., virtual or augmented reality environments, are becoming increasingly popular for the investigation and training of motor actions. Yet, it remains unclear whether results of research and training in those environments transfer in the expected way to natural environments. Here, we investigated the types of visual cues required to ensure naturalistic hand movements in simulated environments. We compared obstacle avoidance of physical objects with obstacle avoidance of closely matched 2D and 3D images of those objects. Participants were asked to reach towards a target position without colliding with obstacles of varying height placed in the movement path. Using a pre-test post-test design, we tested obstacle avoidance for 2D and 3D images of obstacles both before and after exposure to the physical obstacles. Consistent with previous findings, participants initially underestimated the magnitude of the differences between the obstacles, but after exposure to the physical obstacles, avoidance performance for the 3D images became similar to performance for the physical obstacles. No such change was found for the 2D images. Our findings highlight the importance of disparity cues for naturalistic motor actions in personal space. Furthermore, they suggest that the observed change in obstacle avoidance for 3D images resulted from a recalibration of the disparity cues in the 3D images using an accurate estimate of the egocentric distance to the obstacles, gained from interaction with the physical obstacles.
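The calibration account rests on a standard piece of stereo geometry: the depth interval signaled by a given relative disparity grows with the square of viewing distance, so an accurate egocentric distance estimate is needed to scale disparity correctly. A small sketch of that relation (small-angle approximation; the 63 mm interocular distance is a typical value, used here for illustration):

```python
import math

def depth_from_disparity(disparity_rad, distance_m, ipd_m=0.063):
    """Depth interval (m) signaled by a relative disparity (radians).

    Small-angle approximation: delta_d ≈ disparity * D**2 / IPD. The same
    retinal disparity implies a larger depth difference at a greater
    egocentric distance D, so misestimating D mis-scales perceived depth.
    """
    return disparity_rad * distance_m ** 2 / ipd_m

# The same 5 arcmin disparity signals ~4x the depth at 1.0 m vs. 0.5 m:
disp = (5 / 60) * math.pi / 180  # 5 arcmin in radians
print(depth_from_disparity(disp, 0.5), depth_from_disparity(disp, 1.0))
```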
{"title":"Obstacle avoidance of physical, stereoscopic, and pictorial objects.","authors":"Martin Giesel, Daniela Ruseva, Constanze Hesse","doi":"10.1007/s10055-025-01119-y","DOIUrl":"10.1007/s10055-025-01119-y","url":null,"abstract":"<p><p>Simulated environments, e.g., virtual or augmented reality environments, are becoming increasingly popular for the investigation and training of motor actions. Yet, so far it remains unclear if results of research and training in those environments transfer in the expected way to natural environments. Here, we investigated the types of visual cues that are required to ensure naturalistic hand movements in simulated environments. We compared obstacle avoidance of physical objects with obstacle avoidance of closely matched 2D and 3D images of the physical objects. Participants were asked to reach towards a target position without colliding with obstacles of varying height that were placed in the movement path. Using a pre-test post-test design, we tested obstacle avoidance for 2D and 3D images of obstacles both before and after exposure to the physical obstacles. Consistent with previous findings, we found that participants initially underestimated the magnitude differences between the obstacles, but after exposure to the physical obstacles avoidance performance for the 3D images became similar to performance for the physical obstacles. No such change was found for 2D images. Our findings highlight the importance of disparity cues for naturalistic motor actions in personal space. Furthermore, they suggest that the observed change in obstacle avoidance for 3D images resulted from a calibration of the disparity cues in the 3D images using an accurate estimate of the egocentric distance to the obstacles gained from the interaction with the physical obstacles.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"29 1","pages":"45"},"PeriodicalIF":4.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11872779/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143558201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experiential learning through virtual reality by-proxy.
Nicola Veitch, Claire Donald, Andrew Judge, Christopher Carman, Pamela Scott, Sonya Taylor, Leah Marks, Avril Edmond, Nathan Kirkwood, Neil McDonnell, Fiona Macpherson
Pub Date: 2025-01-01 | Epub Date: 2025-02-08 | DOI: 10.1007/s10055-025-01106-3
Virtual Reality 29(1), article 38. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11906506/pdf/
Virtual reality (VR) is increasingly being used as a teaching and learning tool; however, scaling this technology is difficult due to technological and cost considerations. An alternative approach that helps address these problems is VR-by-proxy, where teaching takes place within a VR environment that is controlled by one lecturer and broadcast to students online. This allows the content to be accessed without specialist equipment while still offering an immersive and interactive experience. Taking advantage of the enforced move to online learning during the COVID-19 pandemic, this study evaluates the implementation of a novel VR-by-proxy disease-diagnostic-laboratory simulation within an undergraduate life sciences course in a higher education setting. Student participants were randomly allocated into two groups: a test group, who took part in a VR-by-proxy lesson, and a control group, who worked with interactive online lab-manual material. We assessed improvement in learning and enjoyment through questionnaires before and after these tasks and collected qualitative data on student attitudes towards VR through focus groups. Our results indicate that although there was no observable difference in learning outcomes between the two groups, students in the test group reported an improved learning experience, greater confidence, and greater enjoyment of learning. In our focus groups, confidence was understood in two ways: first, as 'understanding' of the various steps involved in conducting a quantitative polymerase chain reaction (qPCR) experiment, and second, as a more general 'familiarity' with the laboratory setting. This study adds to the growing body of research into the effectiveness of VR for learning and teaching, highlighting that VR-by-proxy may provide many of the same benefits.
Supplementary information: The online version contains supplementary material available at 10.1007/s10055-025-01106-3.
{"title":"Experiential learning through virtual reality by-proxy.","authors":"Nicola Veitch, Claire Donald, Andrew Judge, Christopher Carman, Pamela Scott, Sonya Taylor, Leah Marks, Avril Edmond, Nathan Kirkwood, Neil McDonnell, Fiona Macpherson","doi":"10.1007/s10055-025-01106-3","DOIUrl":"https://doi.org/10.1007/s10055-025-01106-3","url":null,"abstract":"<p><p>Virtual reality (VR) is increasingly being used as a teaching and learning tool, however scaling this technology is difficult due to technological and cost considerations. An alternative approach that helps to address these problems is VR-by-proxy, where teaching takes place within a VR environment that is controlled by one lecturer and broadcast to students online. This allows the content to be accessed without specialist equipment while still offering an immersive and interactive experience. Taking advantage of the enforced move to online learning during the COVID-19 pandemic, this study evaluates the implementation of a novel VR-by-proxy disease diagnostic laboratory VR simulation within an undergraduate life sciences course in a higher education setting. Student participants were randomly allocated into two groups: the test group, who took part in a VR-by-proxy lesson; and a control group, who worked with interactive online lab manual material. We assessed improvement in learning and enjoyment through questionnaires before and after these tasks and collected qualitative data on student attitudes towards VR through focus groups. Our results indicate that although there is no observable difference in learning outcomes between the two groups, students in the test group reported an improved learning experience, confidence and enjoyment of learning. In our focus groups, confidence was understood in two ways by participants: firstly, as 'understanding' of the various steps involved in conducting a quantitative polymerase chain reaction experiment and secondly as a more general 'familiarity' with the laboratory setting. This study adds to the growing body of research into the effectiveness of VR for learning and teaching, highlighting that VR-by-proxy may provide many of the same benefits.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s10055-025-01106-3.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"29 1","pages":"38"},"PeriodicalIF":4.4,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11906506/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143651010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}