Robot remote control using virtual reality headset: studying sense of agency with subjective distance estimates
Pub Date: 2024-07-06 | DOI: 10.1007/s10055-024-01028-6
Artem S. Yashin, Daniil S. Lavrov, Eugeny V. Melnichuk, Valery V. Karpov, Darisy G. Zhao, Ignat A. Dubynin
Mobile robots have many applications in the modern world. The autonomy of robots is increasing, but critical cases like search and rescue missions must allow for human intervention for ethical and safety reasons. To achieve effective human–robot interaction, the operator needs to have a sense of agency (SoA) over the activities of the robot. One possible way to increase one's SoA in remote control could be the use of VR technology. Remote control has some distinctive features, so indicators of SoA need to be reproduced in that setting independently. In our study, participants controlled a mobile robot using either a monitor or a VR headset as an output device. In both cases, active control was contrasted with passive observation of the robot's movement. In each trial, participants estimated the distance traveled by the robot, a putative implicit indicator of SoA. A significant difference between subjective distance estimates in the active and passive conditions was found with the monitor, but not with VR. The effect obtained in the monitor conditions suggests that distance estimates can be used as an implicit indicator of SoA in robot remote control. We believe that the lack of difference between the active and passive conditions in VR was caused by motion sickness due to a mismatch between visual and vestibular sensory cues, leading to a weakened SoA.
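The active/passive contrast described above lends itself to a simple within-subject comparison. Below is a minimal, hypothetical sketch, not taken from the paper: it assumes per-participant mean distance estimates for the monitor condition and uses a paired t-test; the numbers and the choice of test are illustrative only.

```python
# Hypothetical sketch: paired comparison of distance estimates in the
# active vs. passive monitor conditions. Values are illustrative, not the
# study's data, and the paper does not specify this exact test.
import numpy as np
from scipy import stats

# Per-participant mean distance estimates (metres), one entry per participant.
active_estimates = np.array([9.2, 10.1, 8.7, 11.3, 9.8, 10.5])
passive_estimates = np.array([10.4, 11.0, 9.5, 12.1, 10.6, 11.2])

t_stat, p_value = stats.ttest_rel(active_estimates, passive_estimates)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```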
{"title":"Robot remote control using virtual reality headset: studying sense of agency with subjective distance estimates","authors":"Artem S. Yashin, Daniil S. Lavrov, Eugeny V. Melnichuk, Valery V. Karpov, Darisy G. Zhao, Ignat A. Dubynin","doi":"10.1007/s10055-024-01028-6","DOIUrl":"https://doi.org/10.1007/s10055-024-01028-6","url":null,"abstract":"<p>Mobile robots have many applications in the modern world. The autonomy of robots is increasing, but critical cases like search and rescue missions must involve the possibility of human intervention for ethical reasons and safety. To achieve effective human–robot interaction, the operator needs to have a sense of agency (SoA) over the activities of the robot. One possible way to increase one's SoA in remote control could be the use of VR technology. The remote control situation has some important features, so indicators of SoA need to be reproduced there independently. In our study, participants controlled a mobile robot using either a monitor or a VR-headset as an output device. In both cases, active control was contrasted with passive observation of the robot's movement. In each trial, participants estimated the distance traveled by the robot—a putative implicit indicator of SoA. A significant difference between subjective distance estimates was found in the active and passive conditions with the monitor, but not in the active and passive conditions with VR. The effect obtained in the monitor conditions suggests that distance estimates can be used as an implicit indicator of SoA in robot remote control. We believe that the lack of difference between the active and passive conditions in VR was caused by motion sickness due to a mismatch of visual and vestibular sensory cues, leading to a weakened SoA.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"87 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141573939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatial resolution measurement method for 3D displays from contrast modulation
Pub Date: 2024-07-02 | DOI: 10.1007/s10055-024-01026-8
Tae Hee Lee, Young Ju Jeong
Augmented Reality 3D head-up displays use an autostereoscopic 3D display as the panel. The 3D optical unit of an autostereoscopic 3D display controls the direction of the light rays at each pixel, allowing users to enjoy a 3D world without glasses. However, these 3D optics cause image quality degradation, and the resulting loss of resolution has a serious impact on 3D image quality. Therefore, it is important to properly measure the 3D resolution with respect to the 3D optics and analyze its impact. In this study, a method for measuring the spatial resolution of 3D displays using contrast modulation is proposed. We first describe the standardized conventional 2D resolution measurement methods and, building on them, propose a 3D resolution method. The spatial and frequency signal responses of 3D displays were investigated. The first method is based on the predominant frequency series; the second is based on contrast modulation. Through experiments with 3D displays, 3D resolution was measured using the proposed method, and the relationship between the parameters of the 3D optics and the resolution was examined.
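Contrast modulation is conventionally defined as CM = (Lmax - Lmin) / (Lmax + Lmin) over a measured luminance line profile. The sketch below illustrates that general idea for a resolution measurement; the luminance values and the 0.25 cut-off are illustrative assumptions, not the thresholds or data used in the paper.

```python
# Hypothetical sketch of a contrast-modulation measurement: for each test
# frequency, take a luminance line profile across the line pairs, compute
# CM = (Lmax - Lmin) / (Lmax + Lmin), and report the highest frequency
# whose CM stays above a chosen threshold. Threshold and profiles are
# illustrative assumptions.
import numpy as np

def contrast_modulation(profile):
    lmax, lmin = np.max(profile), np.min(profile)
    return (lmax - lmin) / (lmax + lmin)

# Luminance samples (cd/m^2) per spatial frequency (line pairs per degree).
profiles = {
    5: np.array([180, 40, 182, 38, 181, 41]),
    10: np.array([150, 70, 148, 72, 151, 69]),
    20: np.array([120, 100, 118, 102, 119, 101]),
}
THRESHOLD = 0.25  # assumed cut-off, not taken from the paper

resolvable = [f for f, p in profiles.items() if contrast_modulation(p) >= THRESHOLD]
print("highest resolvable frequency:", max(resolvable) if resolvable else None)
```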
{"title":"Spatial resolution measurement method for 3D displays from contrast modulation","authors":"Tae Hee Lee, Young Ju Jeong","doi":"10.1007/s10055-024-01026-8","DOIUrl":"https://doi.org/10.1007/s10055-024-01026-8","url":null,"abstract":"<p>Augmented Reality 3D head-up displays use a autostereoscopic 3D display as a panel. The 3D optical unit of autostereoscopic 3D displays controls the direction of the light rays in each pixel, allowing the users enjoy 3D world without glasses. However, these 3D optics cause image quality degradation. Deterioration of resolution has a serious impact on 3D image quality. Therefore, it is important to properly measure the 3D resolution according to 3D optics and analyze its impact. In this study, a method for measuring spatial resolution in 3D displays using contrast modulation is proposed. We describe a conventional 2D resolution measurement methods that are standardized. Based on the existing 2D resolution methods, we propose a 3D resolution method. The spatial and frequency signal responses of 3D displays were investigated. The first method is determined by the predominant frequency series. The second method is conducted by contrast modulation. Through experiments with 3D displays, 3D resolution was measured using the proposed method, and the relationship between the parameters and resolution of 3D optics was examined.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"154 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141530627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of MR–VR tele-operation methods for legged-manipulator robots
Pub Date: 2024-07-02 | DOI: 10.1007/s10055-024-01021-z
Christyan Cruz Ulloa, David Domínguez, Jaime del Cerro, Antonio Barrientos
The development of immersive technologies in recent years has facilitated the control and execution of highly complex tasks in robotic systems. At the same time, exploration and manipulation in unknown environments remain among the main challenges in search and rescue (SAR) robotics. Due to the complexity and uncertainty involved in autonomous manipulation in unstructured environments, such tasks are usually teleoperated initially. This article presents a comparative study of Mixed Reality (MR, HoloLens) and Virtual Reality (VR, HTC Vive) methods for teleoperating legged-manipulator robots in the context of search and rescue. For this purpose, a robot teleoperation method was established to support the comparison: VR and MR interfaces with the same contextualization and operational functionality were developed for mission management and hand-gesture control of a robotic system composed of a quadruped robot equipped with a 6-degrees-of-freedom (6DoF) manipulator. A set of metrics is proposed for the comparative evaluation of the interfaces, considering parameters that characterize operability in the context of the mission (latencies, physical parameters of the equipment, etc.) as well as operator performance (required training, confidence levels, etc.). The experimental phase was conducted using both on-site and remote operation to evaluate and categorize the advantages and disadvantages of each method.
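As an illustration of one of the operability metrics listed above, the hypothetical sketch below computes end-to-end command latency from logged send and acknowledge timestamps; the timestamps and the logging scheme are assumptions, since the paper's instrumentation is not described here.

```python
# Hypothetical sketch of a latency metric: time between a command being
# issued in the interface and the robot acknowledging it. Timestamps are
# invented placeholders, not the study's measurements.
import statistics

sent_ts = [0.000, 1.020, 2.050, 3.010]   # seconds, command issued in the interface
acked_ts = [0.210, 1.260, 2.240, 3.230]  # seconds, robot confirms execution start

latencies_ms = [(a - s) * 1000 for s, a in zip(sent_ts, acked_ts)]
print(f"mean latency: {statistics.mean(latencies_ms):.0f} ms, "
      f"max: {max(latencies_ms):.0f} ms")
```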
{"title":"Analysis of MR–VR tele-operation methods for legged-manipulator robots","authors":"Christyan Cruz Ulloa, David Domínguez, Jaime del Cerro, Antonio Barrientos","doi":"10.1007/s10055-024-01021-z","DOIUrl":"https://doi.org/10.1007/s10055-024-01021-z","url":null,"abstract":"<p>The development of immersive technologies in recent years has facilitated the control and execution of tasks at a high level of complexity in robotic systems. On the other hand, exploration and manipulation tasks in unknown environments have been one of the main challenges in search and rescue (SAR) robotics. Due to the complexity and uncertainty involved in autonomous manipulation tasks in unstructured environments, these are usually tele-operated initially. This article addresses a comparative study between Mixed Reality (MR—Hololens) and Virtual Reality (VR—HTC-Vive) methods for teleoperating legged-manipulator robots in the context of search and rescue. For this purpose, a teleoperation robotics method was established to address the comparison, developing VR–MR interfaces with the same contextualization and operational functionality for mission management and robot control of a robotic set composed of a quadrupedal robot equipped with a 6 degrees of freedom (6DoF) manipulator, by a user using hand gestures. A set of metrics is proposed for the comparative evaluation of the interfaces considering parameters that allow analyzing operability in the context of the mission (latencies, physical parameters of the equipment, etc.), as well as from the aspect of operator performance (required training, confidence levels, etc.). The experimental phase was conducted using both on-site and remote operations to evaluate and categorize the advantages and disadvantages of each method.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"121 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141514373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effectiveness of immersive virtual reality in teaching empathy to medical students: a mixed methods study
Pub Date: 2024-07-01 | DOI: 10.1007/s10055-024-01019-7
Riham Alieldin, Sarah Peyre, Anne Nofziger, Raffaella Borasi
Empathy in healthcare has been associated with positive outcomes such as increased patient satisfaction and reduced medical errors. However, research has indicated a decline in empathy among medical professionals. This study examined the effectiveness of Immersive Virtual Reality (IVR) for empathy training in medical education. A convergent mixed-methods pretest-posttest design was used. Participants were first-year medical students who engaged in an IVR educational intervention for empathy training, built around a scenario depicting older adults struggling with social isolation. The Jefferson Scale of Empathy (JSE) questionnaire was administered before and after the intervention to measure the change in empathy levels, and the pre-/post-test scores were compared with a paired-sample t-test. Nineteen qualitative semi-structured interviews were conducted immediately after the IVR experience, with follow-up interviews six months later. The interview transcripts were analyzed using a thematic and content analysis approach to capture individual experiences. Students (n = 19) scored 5.94 points higher on the posttest JSE than on the pretest (p < 0.01), indicating an improvement in empathy levels. Qualitative analysis showed that the IVR training was well received by the students as a valuable empathy-teaching tool. Immersion, presence, and embodiment were identified as the main features of IVR technology that enhanced empathy and understanding of patients' experiences, and the debriefing sessions were identified as a key element of the training. IVR-based training could be an effective, well-received teaching tool for empathy training in medical education. Results from the study offer preliminary evidence that using IVR to evoke empathy is achievable.
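A minimal sketch of the pre/post comparison described above, using the paired-sample t-test the abstract names; the JSE score vectors are synthetic placeholders (a seeded random offset), not the study's data.

```python
# Sketch of the pre/post analysis: paired-sample t-test on Jefferson Scale
# of Empathy (JSE) scores for n = 19 students. Scores below are made up.
import numpy as np
from scipy import stats

jse_pre = np.array([105, 98, 112, 101, 95, 108, 99, 110, 103, 97,
                    106, 100, 94, 111, 102, 96, 109, 104, 107], dtype=float)
jse_post = jse_pre + np.random.default_rng(0).normal(5.9, 3.0, size=jse_pre.size)

t_stat, p_value = stats.ttest_rel(jse_post, jse_pre)
print(f"mean change = {np.mean(jse_post - jse_pre):.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```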
{"title":"Effectiveness of immersive virtual reality in teaching empathy to medical students: a mixed methods study","authors":"Riham Alieldin, Sarah Peyre, Anne Nofziger, Raffaella Borasi","doi":"10.1007/s10055-024-01019-7","DOIUrl":"https://doi.org/10.1007/s10055-024-01019-7","url":null,"abstract":"<p>Empathy in healthcare has been associated with positive outcomes such as increased patient satisfaction and reduced medical errors. However, research has indicated a decline in empathy among medical professionals. This study examined the effectiveness of Immersive Virtual Reality (IVR) for empathy training in medical education. A convergent mixed methods pretest posttest design was utilized. Participants were 1st-year medical students who engaged in an empathy training IVR educational intervention around a scenario depicting older adults struggling with social isolation. Jefferson Scale of Empathy (JSE) questionnaire was administered before and after the intervention to measure the change in empathy levels. Data were analyzed using a paired sample t-test on the pre-/post-test JSE empathy scores to assess the change in empathy scores. Nineteen qualitative semi structured interviews were conducted immediately after the IVR experience and follow-up interviews were conducted six months later. Qualitative data collected from the interviews’ transcripts were analyzed using a thematic and content analysis approach to capture individual experiences. Students (n = 19) scored 5.94 points higher on the posttest JSE questionnaire compared to pretest (p < 0.01) indicating an improvement in empathy levels. Qualitative analysis showed that the IVR training was well received by the students as a valuable empathy-teaching tool. Immersion, presence, and embodiment were identified as the main features of IVR technology that enhanced empathy and understanding of patients’ experiences. The debriefing sessions were identified as a key element of the training. IVR-based training could be an effective teaching tool for empathy training in medical education and one that is well received by learners. Results from the study offer preliminary evidence that using IVR to evoke empathy is achievable.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"52 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling personalized VR experiences: a framework for real-time adaptation and recommendations in VR environments
Pub Date: 2024-06-26 | DOI: 10.1007/s10055-024-01020-0
Sergio Valmorisco, Laura Raya, Alberto Sanchez
The personalization of user experiences through recommendation systems has been extensively explored in Internet applications, but it has yet to be fully addressed in Virtual Reality (VR) environments. The complexity of managing geometric 3D data, computational load, and natural interactions poses significant challenges for real-time adaptation in these immersive experiences. However, tailoring VR environments to individual user needs and interests holds promise for enhancing user experiences. In this paper, we present Virtual Reality Environment Adaptation through Recommendations (VR-EAR), a framework designed to address this challenge. VR-EAR employs customizable object metadata and a hybrid recommendation system that models implicit user feedback in VR environments, and it uses VR optimization techniques to ensure efficient performance. To evaluate the framework, we designed a virtual store where product locations dynamically adjust based on user interactions. Our results demonstrate the effectiveness of VR-EAR in adapting and personalizing VR environments in real time.
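VR-EAR's actual recommendation algorithm is not detailed in the abstract; as a rough illustration of a hybrid recommender driven by object metadata and implicit feedback, the sketch below blends a content-based tag similarity with a normalised interaction weight. The tag sets, the 0.7/0.3 blend, and all names are hypothetical.

```python
# Hypothetical hybrid score: content-based similarity over metadata tags
# combined with an implicit-feedback weight (e.g. normalised interaction
# counts). Not VR-EAR's actual algorithm; everything here is illustrative.

def jaccard(tags_a, tags_b):
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def hybrid_score(candidate_tags, implicit_weight, profile_tags, alpha=0.7):
    """Blend content similarity with an implicit-feedback weight in [0, 1]."""
    return alpha * jaccard(candidate_tags, profile_tags) + (1 - alpha) * implicit_weight

# The user interacted mostly with sports items; rank two candidate products.
profile = ["sports", "shoes", "outdoor"]
candidates = {
    "running_shoes": (["sports", "shoes"], 0.9),  # heavily interacted with
    "office_chair":  (["furniture"], 0.1),
}
ranked = sorted(candidates,
                key=lambda k: hybrid_score(*candidates[k], profile_tags=profile),
                reverse=True)
print(ranked)  # products in descending recommendation order
```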
{"title":"Enabling personalized VR experiences: a framework for real-time adaptation and recommendations in VR environments","authors":"Sergio Valmorisco, Laura Raya, Alberto Sanchez","doi":"10.1007/s10055-024-01020-0","DOIUrl":"https://doi.org/10.1007/s10055-024-01020-0","url":null,"abstract":"<p>The personalization of user experiences through recommendation systems has been extensively explored in Internet applications, but this has yet to be fully addressed in Virtual Reality (VR) environments. The complexity of managing geometric 3D data, computational load, and natural interactions poses significant challenges in real-time adaptation in these immersive experiences. However, tailoring VR environments to individual user needs and interests holds promise for enhancing user experiences. In this paper, we present Virtual Reality Environment Adaptation through Recommendations (<i>VR-EAR</i>), a framework designed to address this challenge. <i>VR-EAR</i> employs customizable object metadata and a hybrid recommendation system modeling implicit user feedback in VR environments. We utilize VR optimization techniques to ensure efficient performance. To evaluate our framework, we designed a virtual store where product locations dynamically adjust based on user interactions. Our results demonstrate the effectiveness of <i>VR-EAR</i> in adapting and personalizing VR environments in real time. domains.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"5 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing and evaluation of a mixed reality system for crime scene investigation training: a hybrid approach
Pub Date: 2024-06-25 | DOI: 10.1007/s10055-024-01018-8
Meshal Albeedan, Hoshang Kolivanda, Ramy Hammady
Police investigation of real-life crime scenes is an essential aspect of forensic science education. However, the practicality of bringing young investigators to actual crime scenes is often hindered by the costs and challenges involved. To overcome these obstacles, new technologies such as mixed reality (MR) are being explored as potential solutions. MR technology offers an interactive and cost-effective way to simulate real-life crime scenes, providing a valuable training experience for young investigators. This paper presents a novel design of an MR system using Microsoft HoloLens 2.0, tailored to a crime scene that was spatially 3D-scanned and reconstructed using a FARO X130 point-cloud 3D scanner blended with photogrammetry techniques. The system was developed through the lens of Experiential Learning Theory and designed using a participatory approach, providing a cost-effective solution to help trained Kuwaiti police officers enhance their investigative skills. To evaluate the system's user experience and user interaction, the Questionnaire of User Interaction Satisfaction and the User Experience Questionnaire were utilised. Forty-four young police officers evaluated the system. Police students showed positive levels of satisfaction with user interaction and the overall user experience, with minimal negative feedback, and female students reported higher satisfaction with the overall impression than male students. Based on the positive feedback, the system will be expanded and taken to the commercialisation stage in the future, to be provided as a tool for crime scene education and investigation practice.
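The female/male contrast reported above is a between-group comparison; the abstract does not state which test was used, so the sketch below simply illustrates one common choice (a Mann-Whitney U test) on invented satisfaction scores.

```python
# Hypothetical sketch of a between-group comparison of overall-impression
# scores (female vs. male participants). Scores are invented placeholders;
# the paper's actual statistical procedure is not specified here.
from scipy import stats

female_scores = [6.5, 6.8, 6.2, 7.0, 6.6]
male_scores = [5.9, 6.1, 6.4, 5.8, 6.0, 6.3]

u_stat, p_value = stats.mannwhitneyu(female_scores, male_scores,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```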
{"title":"Designing and evaluation of a mixed reality system for crime scene investigation training: a hybrid approach","authors":"Meshal Albeedan, Hoshang Kolivanda, Ramy Hammady","doi":"10.1007/s10055-024-01018-8","DOIUrl":"https://doi.org/10.1007/s10055-024-01018-8","url":null,"abstract":"<p>Police investigation in real-life crime scenes is an essential aspect of forensic science education. However, the practicality of bringing young investigators to actual crime scenes is often hindered by the costs and challenges involved. In order to overcome these obstacles, new technologies such as mixed reality (MR) are being explored as potential solutions. MR technology offers an interactive and cost-effective way to simulate real-life crime scenes, providing a valuable training experience for young investigators. This paper presents a novel design of a MR system using Microsoft HoloLens 2.0, which is tailored to work in a spatial 3D scanned and reconstructed crime scene using FARO point cloud 3D scanner X130 blended with photogrammetry techniques. The system was developed through the lens of Experiential Learning Theory and designed using a participatory approach, providing a cost-effective solution to help trained Kuwaiti police officers enhance their investigative skills. In order to evaluate the system’s user experience and user interaction, the Questionnaire of User Interaction Satisfaction and User Experience Questionnaire were utilised. Forty-four young police officers evaluated the system. Police students showed positive levels of satisfaction with user interaction and overall user experience with minimal negative feedback. Female students showed higher satisfaction with the overall impression compared to male students. Based on the positive feedback regarding the system expansion, the system will be taken into the commercialisation stage in the future to be provided as an essential tool for crime scene education and investigation practices.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"44 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effect of virtual reality interventions on reducing pain intensity in chronic pain patients: a systematic review
Pub Date: 2024-06-25 | DOI: 10.1007/s10055-024-00994-1
L. Giacomelli, C. Martin Sölch, K. Ledermann
The use of virtual reality (VR) for the management of chronic pain is an intriguing topic. Given the abundance of VR studies and the numerous opportunities presented by this technology in healthcare, a systematic review focused on VR and its applications in chronic pain is needed to shed light on the various modalities available and their actual effectiveness. This systematic review aims to explore the efficacy of VR interventions in reducing pain and improving pain management for people suffering from chronic pain. Following the PRISMA guidelines, data collection was conducted between December 2020 and February 2021 from the following databases: Cochrane Evidence, JSTOR, Science Direct, PubMed Medline, PubMed NIH, Springer Link, PsychNET, PsychINFO - OVID and PsycARTICLES, Wiley Online Library, Web of Science, ProQuest - MEDLINE®, Sage Journals, NCBI – NLM catalog, Medline OVID, Medline EBSCO, Oxford Handbooks Online, PSYNDEX OVID, and Google Scholar. Seventeen articles were included in the qualitative synthesis. Our results highlight that VR interventions, overall, lead to an improvement in pain-related variables, particularly in reducing pain intensity. However, the analyzed articles vary significantly, making them challenging to compare. Future studies could focus on specific types of VR interventions to reduce heterogeneity and allow a more specific analysis. In conclusion, VR interventions have demonstrated their validity and adaptability as a method for managing chronic pain. Nevertheless, further studies are needed to examine the various categories of VR interventions in more detail.
{"title":"The effect of virtual reality interventions on reducing pain intensity in chronic pain patients: a systematic review","authors":"L. Giacomelli, C. Martin Sölch, K. Ledermann","doi":"10.1007/s10055-024-00994-1","DOIUrl":"https://doi.org/10.1007/s10055-024-00994-1","url":null,"abstract":"<p>The use of virtual reality (VR) for the management of chronic pain is an intriguing topic. Given the abundance of VR stuies and the numerous opportunities presented by this technology in healthcare, a systematic review that focuses on VR and its applications in chronic pain is necessary to shed light on the various modalities available and their actual effectiveness. This systematic review aims to explore the efficacy of reducing pain and improving pain management through CR interventions for people suffering from chronic pain. Following the PRISMA guidelines, data collection was conducted between December 2020 and February 2021 from the following databases: <i>Cochrane Evidence, JSTOR, Science Direct, PubMed Medline, PubMed NIH, Springer Link, PsychNET, PsychINFO - OVID</i> and <i>PsycARTICLES, Wiley Online Library, Web of Science, ProQuest - MEDLINE®, Sage Journals, NCBI – NLM catalog, Medline OVID, Medline EBSCO, Oxford Handbooks Online, PSYNDEX OVID, Google Scholar.</i> Seventeen articles were included in the qualitative synthesis. Our results highlight that VR interventions, on a global scale, lead to an improvement in pain-related variables, particularly in reducing pain intensity. However, the analyzed articles vary significantly, making them challenging to compare. Future studies could focus on specific types of VR interventions to reduce heterogeneity and conduct a more specific analysis. In conclusion, VR interventions have demonstrated their validity and adaptability as a method for managing chronic pain. Nevertheless, further studies are needed to delve into the various categories of VR interventions in more detail.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"35 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perspective matters: a systematic review of immersive virtual reality to reduce racial prejudice
Pub Date: 2024-06-21 | DOI: 10.1007/s10055-024-01024-w
Sarah Higgins, Stephanie Alcock, Bianca De Aveiro, William Daniels, Harry Farmer, Sahba Besharati
In the wake of the COVID-19 pandemic and the rise of social justice movements, increased attention has been directed to levels of intergroup tension worldwide. Racial prejudice is one such tension that permeates societies and creates distinct inequalities at all levels of our social ecosystem. Whether these prejudices are expressed explicitly (directly or consciously) or implicitly (unconsciously or automatically), manipulating body ownership by embodying an avatar of another race in immersive virtual reality (IVR) presents a promising approach to reducing racial bias. Nevertheless, research findings are contradictory, possibly because of variation in methodological factors across studies. This systematic review therefore aimed to identify variables and methodological variations that may underlie the observed discrepancies in study outcomes. Adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, the review encompassed 12 studies that employed IVR and embodiment techniques to investigate racial attitudes. Subsequently, two mini meta-analyses were performed on four and five of these studies, respectively, both of which used the Implicit Association Test (IAT) as the metric for these biases. The review demonstrated that IVR allows not only the manipulation of a sense of body ownership but also the investigation of wider social identities. Despite the novelty of IVR as a tool to help understand and possibly reduce racial bias, our review identified key limitations in the existing literature: inconsistencies in the measures and in the IVR equipment and software employed, as well as limited diversity in demographic characteristics within both the sampled populations and the embodied avatars. Future studies are needed to address these critical shortcomings. Specific recommendations include: (1) enhancing participant diversity in terms of sample representation and by integrating ethnically diverse avatars; (2) employing multi-modal methods in assessing embodiment; (3) increasing consistency in the use and administration of implicit and explicit measures of racial prejudice; and (4) implementing consistent approaches in using IVR hardware and software to enhance the realism of the IVR experience.
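The abstract does not report how the two mini meta-analyses were computed; as a generic illustration of inverse-variance (fixed-effect) pooling of per-study effect sizes, the sketch below uses invented standardized mean differences and variances, not the review's data.

```python
# Hypothetical mini meta-analysis sketch: inverse-variance weighted pooling
# of per-study IAT effect sizes. Effect sizes and variances are illustrative.
import math

studies = [  # (standardised mean difference, variance)
    (-0.35, 0.04),
    (-0.10, 0.05),
    (0.05, 0.06),
    (-0.20, 0.03),
]

weights = [1 / v for _, v in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"pooled d = {pooled:.2f}, "
      f"95% CI = [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
```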
{"title":"Perspective matters: a systematic review of immersive virtual reality to reduce racial prejudice","authors":"Sarah Higgins, Stephanie Alcock, Bianca De Aveiro, William Daniels, Harry Farmer, Sahba Besharati","doi":"10.1007/s10055-024-01024-w","DOIUrl":"https://doi.org/10.1007/s10055-024-01024-w","url":null,"abstract":"<p>In the wake of the COVID-19 pandemic and the rise of social justice movements, increased attention has been directed to levels of intergroup tension worldwide. Racial prejudice is one such tension that permeates societies and creates distinct inequalities at all levels of our social ecosystem. Whether these prejudices are present explicitly (directly or consciously) or implicitly (unconsciously or automatically), manipulating body ownership by embodying an avatar of another race using immersive virtual reality (IVR) presents a promising approach to reducing racial bias. Nevertheless, research findings are contradictory, which is possibly attributed to variances in methodological factors across studies. This systematic review, therefore, aimed to identify variables and methodological variations that may underlie the observed discrepancies in study outcomes. Adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, this systematic review encompassed 12 studies that employed IVR and embodiment techniques to investigate racial attitudes. Subsequently, two mini meta-analyses were performed on four and five of these studies, respectively — both of which utilised the Implicit Association Test (IAT) as a metric to gauge these biases. This review demonstrated that IVR allows not only the manipulation of a sense of body ownership but also the investigation of wider social identities. Despite the novelty of IVR as a tool to help understand and possibly reduce racial bias, our review has identified key limitations in the existing literature. Specifically, we found inconsistencies in the measures and IVR equipment and software employed, as well as diversity limitations in demographic characteristics within both the sampled population and the embodiment of avatars. Future studies are needed to address these critical shortcomings. Specific recommendations are suggested, these include: (1) enhancing participant diversity in terms of the sample representation and by integrating ethnically diverse avatars; (2) employing multi-modal methods in assessing embodiment; (3) increasing consistency in the use and administration of implicit and explicit measures of racial prejudice; and (4) implementing consistent approaches in using IVR hardware and software to enhance the realism of the IVR experience.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"347 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and implementation of an interactive virtual library based on its physical counterpart
Pub Date: 2024-06-13 | DOI: 10.1007/s10055-024-01023-x
Christina-Georgia Serghides, George Christoforides, Nikolas Iakovides, Andreas Aristidou
The rapid technological advancements and the widespread adoption of the internet have diminished the role of the physical library as a main information resource. As the Metaverse evolves, a revolutionary change is anticipated in how social relationships are perceived within an educational context. It is therefore necessary for libraries to upgrade the services they provide to keep pace with technological trends and be part of this virtual revolution. It is believed that the design and development of a Virtual Reality (VR) library can be the community and knowledge hub that society needs. In this paper, the process of creating a partially digital replica of the Limassol Municipal University Library, a landmark of the city of Limassol, is examined using photogrammetry and 3D modelling. A 3D platform was developed in which users have the perception that they are experiencing the actual library. To that end, a perceptual study was conducted to understand the current usage of physical libraries, examine users' experience in VR, and identify the requirements and expectations for the development of a virtual library counterpart. Following the suggestions and observations from the perceptual study, five key scenarios were implemented that demonstrate the potential use of a virtual library. This work incorporates the fundamental VR attributes, such as immersiveness, realism, user interactivity and feedback, as well as other features, such as animated NPCs, 3D audio, ray-casting and GUIs, which significantly augment the overall VR library user experience, presence, and navigation autonomy. The main effort of this project was to produce a VR representation of an existing physical library, integrated with its key services, as a proof of concept, with emphasis on easy 24/7 access, functionality, and interactivity. These attributes differentiate this work from existing studies. A detailed user evaluation study was conducted upon completion of the final VR library implementation, which firmly confirmed all its key attributes and future viability.
{"title":"Design and implementation of an interactive virtual library based on its physical counterpart","authors":"Christina-Georgia Serghides, George Christoforides, Nikolas Iakovides, Andreas Aristidou","doi":"10.1007/s10055-024-01023-x","DOIUrl":"https://doi.org/10.1007/s10055-024-01023-x","url":null,"abstract":"<p>The rapid technological advancements and the widespread adoption of the internet have diminished the role of the physical library as a main information resource. As the Metaverse is evolving, a revolutionary change is anticipated in how social relationships are perceived, within an educational context. It is therefore necessary for libraries to upgrade the services they provide to keep in line with the technological trends and be a part of this virtual revolution. It is believed that the design and development of a Virtual Reality (VR) library can be the community and knowledge hub the society needs. In this paper, the process of creating a partially digital replica of the Limassol Municipal University Library, a landmark for the city of Limassol, is examined by using photogrammetry and 3D modelling. A 3D platform was developed, where users have the perception that they are experiencing the actual library. To that end, a perceptual study was conducted, to understand the current usage of physical libraries, examine the users’ experience in VR, and identify the requirements and expectations in the development of a virtual library counterpart. Following the suggestions and observations from the perceptual study, five key scenarios were implemented that demonstrate the potential use of a virtual library. This work incorporates the fundamental VR attributes, such as immersiveness, realism, user interactivity and feedback as well as other features, such as animated NPCs, 3D audio, ray-casting and GUIs, that significantly augment the overall VR library user experience, presence as well as navigation autonomy. The main effort of this project was to produce a VR representation of an existing physical library, integrated with its key services, as a proof-of-concept, with emphasis on easy 24/7 access, functionality, and interactivity. The above attributes differentiate this work from existing studies. A detailed user evaluation study was conducted upon completion of the final VR library implementation, which firmly confirmed all its key attributes and future viability.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"69 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bringing the field into the lab: a novel virtual reality outdoor march simulator for evaluating cognitive and physical performance
Pub Date: 2024-06-01 | DOI: 10.1007/s10055-024-01013-z
Shani Kimel Naor, Itay Ketko, Ran Yanovich, Amihai Gottlieb, Yotam Bahat, Oran Ben-Gal, Yuval Heled, Meir Plotnik
Soldiers, athletes, and rescue personnel must often maintain cognitive focus while performing intense, prolonged, and physically demanding activities. The simultaneous activation of cognitive and physical functions can disrupt their performance reciprocally. In the current study, we developed and demonstrated the feasibility of a virtual reality (VR)-based experimental protocol that enables rigorous exploration of the effects of prolonged physical and cognitive effort. A battery of established neurocognitive tests was used to compare novel cognitive tasks to simulated loaded marches. We simulated a 10-km loaded march in our virtual reality environment, with or without integrated cognitive tasks (VR-COG). During three experimental visits, participants were evaluated pre- and post-activity with the Color Trail Test (CTT), the Synthetic Work Environment (SYNWIN) battery for assessing multitasking, and physical tests (i.e., time to exhaustion). Strong or moderate correlations (r ≥ 0.58, p ≤ 0.05) were found between VR-COG scores and scores on the cognitive tests. Both the SYNWIN and the CTT showed no condition effects but significant time effects, indicating better performance in the post-activity assessment than in the pre-activity assessment. This novel protocol can contribute to our understanding of physical-cognitive interactions, since virtual environments are ideal for studying high-performance professional activity in realistic but controlled settings.
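A minimal sketch of the kind of correlation reported above (VR-COG scores against an established cognitive test); Pearson's r is assumed here, and the score vectors are illustrative placeholders, not the study's data.

```python
# Hypothetical sketch: correlation between VR-COG scores and a conventional
# cognitive test score (e.g. CTT-derived). All values are invented.
import numpy as np
from scipy import stats

vr_cog = np.array([72, 65, 80, 58, 90, 77, 69, 84])
cognitive_test = np.array([70, 60, 78, 55, 88, 75, 72, 80])

r, p_value = stats.pearsonr(vr_cog, cognitive_test)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```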
{"title":"Bringing the field into the lab: a novel virtual reality outdoor march simulator for evaluating cognitive and physical performance","authors":"Shani Kimel Naor, Itay Ketko, Ran Yanovich, Amihai Gottlieb, Yotam Bahat, Oran Ben-Gal, Yuval Heled, Meir Plotnik","doi":"10.1007/s10055-024-01013-z","DOIUrl":"https://doi.org/10.1007/s10055-024-01013-z","url":null,"abstract":"<p>Soldiers, athletes, and rescue personnel must often maintain cognitive focus while performing intense, prolonged, and physically demanding activities. The simultaneous activation of cognitive and physical functions can disrupt their performance reciprocally. In the current study, we developed and demonstrated the feasibility of a virtual reality (VR)-based experimental protocol that enables rigorous exploration of the effects of prolonged physical and cognitive efforts. A battery of established neurocognitive tests was used to compare novel cognitive tasks to simulated loaded marches. We simulated a 10-km loaded march in our virtual reality environment, with or without integrated cognitive tasks (VR-COG). During three experimental visits, participants were evaluated pre- and post-activity, including the Color Trail Test (CTT), the Synthetic Work Environment (SYNWIN) battery for assessing multitasking, and physical tests (i.e., time to exhaustion). Results show that Strong or moderate correlations (r ≥ 0.58, <i>p</i> ≤ 0.05) were found between VR-COG scores and scores on the cognitive tests. Both the SYNWIN and CTT showed no condition effects but significant time effects, indicating better performance in the post-activity assessment than in the pre-activity assessment. This novel protocol can contribute to our understanding of physical-cognitive interactions, since virtual environments are ideal for studying high performance professional activity in realistic but controlled settings.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"15 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141192517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}