Pub Date: 2024-08-12 | DOI: 10.1007/s10055-024-01042-8
Walter Terkaj, Marcello Urgo, Péter Kovács, Erik Tóth, Marta Mondellini
Advances in digital factory technologies offer great potential to innovate higher education by enabling learning approaches based on virtual laboratories that increase student involvement while delivering realistic experiences. This article introduces a framework for the development of virtual learning applications that addresses multidisciplinary requirements. Implementation of the framework is eased by the proposed virtual learning factory application (VLFA), an open-source solution that uses virtual reality to support innovative higher-education learning activities in industrial engineering. A complete design and development workflow is described, from the identification of requirements, through the design of software modules and underlying technologies, to the final implementation. The framework and the VLFA were tested by implementing a serious game on the design and analysis of manufacturing systems, and feedback was collected from students and teachers.
Title: A framework for virtual learning in industrial engineering education: development of a reconfigurable virtual learning factory application
Pub Date: 2024-08-05 | DOI: 10.1007/s10055-024-01029-5
Patrice Piette, Emilie Leblong, Romain Cavagna, Albert Murienne, Bastien Fraudet, Philippe Gallien
Virtual rehabilitation using Virtual Reality (VR) technology is a promising novel approach to rehabilitation. However, postural responses in VR differ significantly from real life. The introduction of an avatar or visual cues in VR could help rectify this difference. An initial session assessed static and dynamic balance performance in VR and in real life to set reference values. A second session involved three VR conditions applied in randomised order: a full-body avatar, enhanced visual cues, or a combination of both. Centre of pressure (COP) performance was recorded on a force plate. Seventy people took part in the first session and 74 in the second. During the first session, a significant difference was observed in left static, right static and right dynamic COP distance (respectively SMD = −0.40 [−0.73, −0.06], p = 0.02; SMD = −0.33 [−0.67, 0.00], p = 0.05; SMD = −0.61 [−0.95, −0.27], p < 0.001), and a non-significant difference in the left dynamic COP distance (SMD = −0.22 [−0.56, 0.11], p = 0.19). During the second session, this difference was corrected mainly by reinforced visual information and, to a lesser extent, by the presence of a full-body avatar. Balance disruption triggered by the use of virtual reality can be offset by vertical visual information and/or by the presence of a full-body avatar. Further research is required on the effects of a full-body avatar.
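The standardized mean differences (SMDs) reported above can be reproduced from group summary statistics. A minimal sketch, using Cohen's d with a pooled standard deviation and illustrative numbers (not the study's data):

```python
import math

def smd_with_ci(mean1, sd1, n1, mean2, sd2, n2, z=1.96):
    """Standardized mean difference (Cohen's d, pooled SD) with ~95% CI."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    # Approximate standard error of d (large-sample formula)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Illustrative group summaries: VR vs real life, 70 participants per condition
d, ci = smd_with_ci(10.0, 2.0, 70, 11.0, 2.5, 70)
print(round(d, 2), [round(x, 2) for x in ci])
```

A negative d here indicates a smaller COP distance in the first condition, matching the sign convention of the SMDs in the abstract.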
Title: A comparison of balance between real and virtual environments: differences, role of visual cues and full-body avatars, a quasi-experimental clinical study
Pub Date: 2024-08-02 | DOI: 10.1007/s10055-024-01040-w
Inoussa Ouedraogo, Huyen Nguyen, Patrick Bourdot
Although Augmented Reality (AR) has been extensively studied as support for Immersive Analytics (IA), many challenges remain in visualising and interacting with big, complex datasets. To handle these datasets, most AR applications rely on NoSQL databases for storing and querying data, especially for managing large volumes of unstructured or semi-structured data. However, NoSQL databases have limited reasoning and inference capabilities, which can leave certain types of queries insufficiently supported. To fill this gap, we explore and evaluate whether an intelligent approach based on ontology and linked data can facilitate visual analytics tasks over big datasets in an AR interface. We designed and implemented a prototype of this method for meteorological data analytics. An experiment was conducted to compare a semantic database with linked data against a conventional approach in an AR-based immersive analytics system. The results highlight the performance of the semantic approach in helping users analyse meteorological datasets, and the users' subjective appreciation of working with an AR interface enhanced with ontology and linked data.
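The advantage of linked data over a flat key-value store is that records can be joined by shared identifiers. A toy triple store (illustrative names, not the paper's system or vocabulary) makes the idea concrete:

```python
# Minimal illustration of linked data: facts are (subject, predicate, object)
# triples, so a query can follow links across records.
triples = {
    ("obs1", "observedAt", "stationA"),
    ("stationA", "type", "Station"),
    ("stationA", "locatedIn", "regionX"),
    ("obs1", "temperature", "31.5"),
    ("obs2", "observedAt", "stationB"),
    ("stationB", "type", "Buoy"),
    ("obs2", "temperature", "28.0"),
}

def objects(subject, predicate):
    """All objects linked to `subject` by `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

def observations_at_land_stations():
    # Join: observation -> station -> type. A flat document store would need
    # application-side code (or denormalized data) to answer this.
    return sorted(
        s for s, p, o in triples
        if p == "observedAt" and "Station" in objects(o, "type")
    )

print(observations_at_land_stations())
```

In practice such joins are expressed declaratively (e.g. in SPARQL over an RDF store), and an ontology additionally lets the store infer facts that were never asserted explicitly.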
Title: Immersive analytics with augmented reality in meteorology: an exploratory study on ontology and linked data
Pub Date: 2024-08-01 | DOI: 10.1007/s10055-024-01039-3
Woojin Cho, Taewook Ha, Ikbeom Jeon, Jinwoo Jeon, Tae-Kyun Kim, Woontack Woo
We propose a robust 3D hand tracking system for diverse hand-action environments, including hand-object interaction, which uses a single color image and the previous pose prediction as input. We observe that existing methods exploit temporal information in motion space deterministically, failing to address the diversity of realistic hand motions. Prior methods also paid little attention to the balance between efficiency and robust performance, i.e., between runtime and accuracy. Our Temporally Enhanced Graph Convolutional Network (TE-GCN) uses a two-stage framework to encode temporal information adaptively. The system strikes this balance by adopting an adaptive GCN, which effectively learns the spatial dependencies between hand-mesh vertices. Furthermore, the system leverages the previous prediction by estimating its relevance to the current image features through an attention mechanism. The proposed method achieves state-of-the-art balanced performance on challenging benchmarks and demonstrates robust results on various hand motions in real scenes. Moreover, the hand tracking system is integrated into a recent HMD with an off-loading framework, achieving a real-time framerate while maintaining high performance. Our study improves the usability of a high-performance hand-tracking method, which can be generalized to other algorithms and contributes to the use of HMDs in everyday life. Our code and the HMD project will be available at https://github.com/UVR-WJCHO/TEGCN_on_Hololens2.
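An "adaptive GCN" over mesh vertices can be sketched as a graph convolution whose adjacency is itself learned rather than fixed by mesh topology. The snippet below is a NumPy sketch of one such layer, not the paper's implementation; all names and the row-softmax normalization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_graph_conv(X, A_logits, W):
    """One adaptive graph-convolution step over hand-mesh vertices.

    X: (V, C) vertex features; A_logits: (V, V) learnable adjacency logits;
    W: (C, C_out) feature projection. Illustrative names, not the paper's API.
    """
    # Row-softmax turns logits into a learned adjacency, so the layer can
    # discover spatial dependencies between vertices instead of relying
    # only on fixed mesh edges.
    A = np.exp(A_logits - A_logits.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)
    return A @ X @ W  # aggregate neighbor features, then project

V, C, C_out = 778, 3, 64          # e.g. the MANO hand mesh has 778 vertices
X = rng.standard_normal((V, C))
A_logits = rng.standard_normal((V, V))
W = rng.standard_normal((C, C_out))
H = adaptive_graph_conv(X, A_logits, W)
print(H.shape)  # (778, 64)
```

In a trained network `A_logits` and `W` would be parameters updated by backpropagation, and several such layers would be stacked.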
Title: Temporally enhanced graph convolutional network for hand tracking from an egocentric camera
Pub Date: 2024-07-27 | DOI: 10.1007/s10055-024-01041-9
Ruowei Xiao, Rongzheng Zhang, Oğuz Buruk, Juho Hamari, Johanna Virkki
Mixed reality (MR) games integrate physical entities with digitally mediated content. Currently, game creators must integrate heterogeneous virtual and physical components without the support of a coherent technology stack, which is often time-consuming and labor-intensive. The technodiversity manifested in the research corpus suggests a complicated, multi-dimensional design space that goes beyond merely technical concerns. In this research, we adopted a research-through-design approach and propose an MR game technology stack that facilitates flexible, low-code game development. As design grounding, we first surveyed 34 state-of-the-art studies and synthesized the results into three spectra of technological affordances (activity range, user interface, and feedback control) to inform our design process. We then went through an iterative prototyping phase and implemented an MR game development toolset. A co-design workshop was conducted in which we invited 15 participants to try the prototype tools and co-ideate potential use scenarios for the proposed technology stack. First-hand user feedback was collected via questionnaires and semi-structured interviews. As a result, four conceptual game designs with three major design implications were generated, which jointly reflect a broader understanding of MR gameful experience and contribute fresh insights to this emerging research domain.
Title: Toward next generation mixed reality games: a research through design approach
Pub Date: 2024-07-24 | DOI: 10.1007/s10055-024-01033-9
Ali Buwaider, Victor Gabriel El-Hajj, Alessandro Iop, Mario Romero, Walter C Jean, Erik Edström, Adrian Elmi-Terander
External ventricular drain (EVD) insertion using the freehand technique is often associated with misplacements resulting in unfavorable outcomes. Augmented Reality (AR) has been increasingly used to complement conventional neuronavigation. The accuracy of AR-guided EVD insertion has been investigated in several studies, on anthropomorphic phantoms, cadavers, and patients. This review aimed to assess the current knowledge and discuss potential benefits and challenges associated with AR guidance in EVD insertion. MEDLINE, EMBASE, and Web of Science were searched from inception to August 2023 for studies evaluating the accuracy of AR guidance for EVD insertion. Studies were screened for eligibility and accuracy data were extracted. The risk of bias was assessed using the Cochrane Risk of Bias Tool and the quality of evidence was assessed using the Newcastle-Ottawa Scale. Accuracy was reported either as the average deviation from target or according to the Kakarla grading system. Of the 497 studies retrieved, 14 were included for analysis. All included studies were prospectively designed. Insertions were performed on anthropomorphic phantoms, cadavers, or patients, using several different AR devices and interfaces. Deviation from target ranged between 0.7 and 11.9 mm. Accuracy according to the Kakarla grading scale ranged between 82 and 96%. Accuracy was higher for AR compared to the freehand technique in all studies that had control groups. Current evidence demonstrates that AR is more accurate than the freehand technique for EVD insertion. However, studies are few, the technology is still developing, and there is a need for further studies on patients in relevant clinical settings.
Title: Augmented reality navigation in external ventricular drain insertion—a systematic review and meta-analysis
Pub Date: 2024-07-23 | DOI: 10.1007/s10055-024-01035-7
H. A. T. van Limpt-Broers, M. Postma, E. van Weelden, S. Pratesi, M. M. Louwerse
The Overview Effect is a complex experience reported by astronauts after viewing Earth from space. Numerous accounts suggest that it leads to increased interconnectedness to other human beings and environmental awareness, comparable to self-transcendence. It can cause fundamental changes in mental models of the world, improved well-being, and stronger appreciation of, and responsibility for Earth. From a cognitive perspective, it is closely linked to the emotion of awe, possibly triggered by the overwhelming perceived vastness of the universe. Given that most research in the domain focuses on self-reports, little is known about potential neurophysiological markers of the Overview Effect. In the experiment reported here, participants viewed an immersive Virtual Reality simulation of a space journey while their brain activity was recorded using electroencephalography (EEG). Post-experimental self-reports confirmed they were able to experience the Overview Effect in the simulated environment. EEG recordings revealed lower spectral power in beta and gamma frequency bands during the defining moments of the Overview Effect. The decrease in spectral power can be associated with reduced mental processing, and a disruption of known mental structures in this context, thereby providing more evidence for the cognitive effects of the experience.
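Band-limited spectral power of the kind compared here (beta, gamma) is typically computed from a power spectral density estimate of each EEG epoch. A minimal sketch with a synthetic signal (the sampling rate, band edges, and signal composition are illustrative, not the study's recording parameters):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` in [f_lo, f_hi) Hz via a periodogram."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * signal.size)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].mean()

fs = 256                      # Hz, a common EEG sampling rate
t = np.arange(0, 4, 1 / fs)   # one 4-second epoch
rng = np.random.default_rng(1)
# Synthetic channel: strong 10 Hz alpha, weaker 25 Hz beta, plus noise
eeg = (2.0 * np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 25 * t)
       + 0.1 * rng.standard_normal(t.size))

beta = band_power(eeg, fs, 13, 30)    # beta band
gamma = band_power(eeg, fs, 30, 45)   # low-gamma band
print(beta > gamma)
```

In practice a windowed, averaged estimator (e.g. Welch's method) is preferred over a raw periodogram, and power is compared across conditions per band and per channel.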
Title: Neurophysiological evidence for the overview effect: a virtual reality journey into space
Pub Date: 2024-07-19 | DOI: 10.1007/s10055-024-01027-7
Mantaj Singh, Peter Smitham, Suyash Jain, Christopher Day, Thomas Nijman, Dan George, David Neilly, Justin de Blasio, Michael Gilmore, Tiffany K. Gill, Susanna Proudman, Gavin Nimon
Knee arthrocentesis is a simple procedure commonly performed by general practitioners and junior doctors. As such, doctors should be competent and comfortable performing the technique on their own; however, they need to be adequately trained. The best way to ensure practitioner proficiency is to optimize teaching at an institutional level, thus educating all future doctors in the procedure. However, the Coronavirus Disease 2019 (COVID-19) pandemic caused significant disruption to hospital teaching for medical students, which necessitated investigating the effectiveness of virtual reality (VR) as a platform to emulate hospital teaching of knee arthrocentesis. A workshop was conducted with 100 fourth-year medical students divided into three groups, A, B and C, each receiving a pre-reading online lecture. Group A was placed in an Objective Structured Clinical Examination (OSCE) station, where they were assessed by a blinded orthopaedic surgeon using the OSCE assessment rubric. Group B undertook a hands-on practice station prior to assessment, while Group C received a VR video (courtesy of the University of Adelaide's Health Simulation), delivered via a VR headset or a 360° surround immersion room, plus a hands-on station, followed by the OSCE. Upon completion of the workshop, students completed a questionnaire on their confidence with the procedure and the practicality of the VR station. OSCE scores were compared between Groups B and C to investigate the educational value of VR teaching. On average, students with VR headsets reported higher confidence with the procedure and were more inclined to undertake it on their own. Students in Group C, who used the VR station prior to assessment, scored higher than the non-VR groups (Group A, 56%; Group B, 67%; Group C, 83%). The difference between Groups A and B was statistically significant (t(69) = 3.003, p = 0.003), as was the difference between Groups B and C (t(62) = 5.400, p < 0.001). Within Group C, students who were given VR headsets scored higher than immersion-room students. The VR headset was beneficial in giving students a representation of how knee arthrocentesis may be conducted in a hospital setting. While VR will not replace conventional in-hospital teaching, given current technological limitations, it serves as an effective teaching aid for arthrocentesis and has many other potential applications across a wide scope of medical and surgical training.
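The group comparisons reported above are independent two-sample t-tests. A minimal sketch of the pooled-variance (Student's) t statistic, using made-up score samples rather than the study's raw data:

```python
import math
from statistics import mean, stdev

def pooled_t(sample_a, sample_b):
    """Student's two-sample t statistic (equal-variance pooling) and its df."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Illustrative OSCE scores (percent), not the study's data
group_b = [62, 70, 65, 68, 71, 66, 69, 64, 67, 70]
group_c = [80, 85, 82, 88, 79, 84, 86, 81, 83, 87]
t, df = pooled_t(group_b, group_c)
print(round(t, 1), df)  # t is negative because Group C scored higher
```

The reported degrees of freedom (69 and 62) imply per-comparison sample sizes of 71 and 64 students respectively; a p-value is then obtained from the t distribution with those df.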
{"title":"Exploring the viability of Virtual Reality as a teaching method for knee aspiration","authors":"Mantaj Singh, Peter Smitham, Suyash Jain, Christopher Day, Thomas Nijman, Dan George, David Neilly, Justin de Blasio, Michael Gilmore, Tiffany K. Gill, Susanna Proudman, Gavin Nimon","doi":"10.1007/s10055-024-01027-7","DOIUrl":"https://doi.org/10.1007/s10055-024-01027-7","url":null,"abstract":"<p>Knee arthrocentesis is a simple procedure commonly performed by general practitioners and junior doctors. As such, doctors should be competent and comfortable in performing the technique by themselves; however, they need to be adequately trained. The best method to ensure practitioner proficiency is by optimizing teaching at an institutional level, thus, educating all future doctors in the procedure. However, the Coronavirus Disease 19 (COVID-19) pandemic caused significant disruption to hospital teaching for medical students which necessitated investigating the effectiveness of virtual reality (VR) as a platform to emulate hospital teaching of knee arthrocentesis. A workshop was conducted with 100 fourth year medical students divided into three Groups: A, B and C, each receiving a pre-reading online lecture. Group A was placed in an Objective Structured Clinical Examination (OSCE) station where they were assessed by a blinded orthopaedic surgeon using the OSCE assessment rubric. Group B undertook a hands-on practice station prior to assessment, while Group C received a VR video (courtesy of the University of Adelaide’s Health Simulation) in the form of VR headset or 360° surround immersion room and hands-on station followed by the OSCE. Upon completion of the workshop, students completed a questionnaire on their confidence with the procedure and the practicality of the VR station. OSCE scores were compared between Groups B and C to investigate the educational value of VR teaching. 
On average, students with VR headsets reported higher confidence with the procedure and were more inclined to undertake it on their own. Students in Group C, who used the VR station prior to assessment, scored higher than the non-VR groups (Group A, 56%; Group B, 67%; Group C, 83%). The difference in mean scores between Groups A and B was statistically significant (t(69) = 3.003, <i>p</i> = 0.003), as was the difference between Groups B and C (t(62) = 5.400, <i>p</i> < 0.001). Within Group C, students who were given VR headsets scored higher than immersion-room students. The VR headset was beneficial in providing students with a representation of how knee arthrocentesis may be conducted in the hospital setting. While VR will not replace conventional in-hospital teaching, given current technological limitations, it serves as an effective teaching aid for arthrocentesis and has many other potential applications across a wide scope of medical and surgical training.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"12 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141742636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-18DOI: 10.1007/s10055-024-01038-4
Dominik Spinczyk, Grzegorz Rosiak, Krzysztof Milczarek, Dariusz Konecki, Jarosław Żyłkowski, Jakub Franke, Maciej Pech, Karl Rohmer, Karol Zaczkowski, Ania Wolińska-Sołtys, Piotr Sperka, Dawid Hajda, Ewa Piętka
In recent years, we have observed a rise in the popularity of minimally invasive procedures for treating liver tumours, percutaneous thermoablation among them, conducted using image-guided navigation systems with mixed reality technology. However, applying this method requires adequate training in the system employed. In our study, we assessed which skills pose the greatest challenges in performing such procedures. The article proposes a training module characterized by an innovative approach: the possibility of practicing the diagnosis, planning, and execution stages, including physically performing the execution stage on a radiological phantom of the abdominal cavity. The proposed approach was evaluated by designing a set of 4 exercises corresponding to the 3 phases mentioned. The research group included 10 radiologists and 5 residents. Based on 20 clinical cases of liver tumours subjected to percutaneous thermoablation, we developed assessment tasks evaluating four skill categories: head-mounted display (HMD) use, ultrasound (US)/computed tomography (CT) image fusion interpretation, tracking system use, and the ability to insert a needle. The results were presented using a Likert scale. The results of our study indicate that the most challenging aspect for radiology specialists is adapting to HMD gesture control, while residents point to intraoperative fusion images and respiratory movements of the liver as the most problematic. To improve the ability to perform procedures on new patients, the module also allows the creation of a new hologram for a different clinical case.
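The per-category Likert-scale assessment described above can be sketched as a simple aggregation: average each category's responses and flag the lowest-scoring category as the most challenging. The response values below are invented for illustration; the study's raw data are not reported in the abstract.

```python
from statistics import mean

# Hypothetical 5-point Likert responses for the four assessed skill
# categories (illustrative only, not the study's data)
responses = {
    "HMD use": [2, 3, 2, 4, 3],
    "US/CT fusion interpretation": [3, 4, 4, 5, 4],
    "tracking system use": [4, 4, 5, 4, 5],
    "needle insertion": [4, 5, 4, 4, 5],
}

# Mean score per category; the lowest mean marks the hardest skill
category_means = {cat: mean(scores) for cat, scores in responses.items()}
hardest = min(category_means, key=category_means.get)
print(hardest, category_means[hardest])
```

With these invented numbers the aggregation singles out HMD use, mirroring the abstract's finding that adapting to HMD gesture control was the most challenging aspect for specialists.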
{"title":"Towards overcoming barriers to the clinical deployment of mixed reality image-guided navigation systems supporting percutaneous ablation of liver focal lesions","authors":"Dominik Spinczyk, Grzegorz Rosiak, Krzysztof Milczarek, Dariusz Konecki, Jarosław Żyłkowski, Jakub Franke, Maciej Pech, Karl Rohmer, Karol Zaczkowski, Ania Wolińska-Sołtys, Piotr Sperka, Dawid Hajda, Ewa Piętka","doi":"10.1007/s10055-024-01038-4","DOIUrl":"https://doi.org/10.1007/s10055-024-01038-4","url":null,"abstract":"<p>In recent years, we have observed a rise in the popularity of minimally invasive procedures for treating liver tumours, percutaneous thermoablation among them, conducted using image-guided navigation systems with mixed reality technology. However, applying this method requires adequate training in the system employed. In our study, we assessed which skills pose the greatest challenges in performing such procedures. The article proposes a training module characterized by an innovative approach: the possibility of practicing the diagnosis, planning, and execution stages, including physically performing the execution stage on a radiological phantom of the abdominal cavity. The proposed approach was evaluated by designing a set of 4 exercises corresponding to the 3 phases mentioned. The research group included 10 radiologists and 5 residents. Based on 20 clinical cases of liver tumours subjected to percutaneous thermoablation, we developed assessment tasks evaluating four skill categories: head-mounted display (HMD) use, ultrasound (US)/computed tomography (CT) image fusion interpretation, tracking system use, and the ability to insert a needle. The results were presented using a Likert scale. The results of our study indicate that the most challenging aspect for radiology specialists is adapting to HMD gesture control, while residents point to intraoperative fusion images and respiratory movements of the liver as the most problematic. To improve the ability to perform procedures on new patients, the module also allows the creation of a new hologram for a different clinical case.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"70 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141742471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-10DOI: 10.1007/s10055-024-01031-x
Aleksandra Zheleva, Lieven De Marez, Durk Talsma, Klaas Bombeke
The advent of virtual reality (VR) technology has necessitated a reevaluation of quality of experience (QoE) models. While numerous recent efforts have been dedicated to creating comprehensive QoE frameworks, the factors studied as potential influencers of QoE are often limited to single disciplinary viewpoints or specific user-related aspects. Furthermore, literature reviews in this domain have predominantly focused on academic sources, overlooking industry insights. To address these points, the current research took an interdisciplinary literature review approach to examine QoE literature covering both academic and industry sources from diverse fields (i.e., psychology, ergonomics, user experience, communication science, and engineering). Based on this rich dataset, we created a QoE model that illustrated 252 factors grouped into four branches: user, system, context, and content. The main finding of this review is the substantial gap in the current research landscape, where complex interactions among user, system, context, and content factors in VR are overlooked. The current research not only identified this crucial disparity in existing QoE studies but also provided a substantial online repository of over 200 QoE-related factors. The repository serves as an indispensable tool for future researchers aiming to construct a more holistic understanding of QoE.
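A factor repository of the kind described, with factors grouped under the four branches, can be represented as a simple keyed collection. The branch names come from the abstract; the factor entries below are invented placeholders, not items from the paper's actual repository.

```python
# Minimal sketch of a four-branch QoE factor taxonomy (user, system,
# context, content). Factor names are illustrative placeholders only.
qoe_factors = {
    "user": ["prior VR experience", "motion-sickness susceptibility"],
    "system": ["display resolution", "motion-to-photon latency"],
    "context": ["physical play space", "social setting"],
    "content": ["narrative pacing", "interaction fidelity"],
}

def factors_in(branch):
    """Return the factors recorded under a branch (empty list if unknown)."""
    return qoe_factors.get(branch, [])

# Total factor count across all branches (252 in the paper's full model)
total = sum(len(v) for v in qoe_factors.values())
print(total, factors_in("system"))
```

Keeping the branch as an explicit key makes it straightforward to query cross-branch interactions later, which is the gap the review highlights.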
{"title":"Intersecting realms: a cross-disciplinary examination of VR quality of experience research","authors":"Aleksandra Zheleva, Lieven De Marez, Durk Talsma, Klaas Bombeke","doi":"10.1007/s10055-024-01031-x","DOIUrl":"https://doi.org/10.1007/s10055-024-01031-x","url":null,"abstract":"<p>The advent of virtual reality (VR) technology has necessitated a reevaluation of quality of experience (QoE) models. While numerous recent efforts have been dedicated to creating comprehensive QoE frameworks, the factors studied as potential influencers of QoE are often limited to single disciplinary viewpoints or specific user-related aspects. Furthermore, literature reviews in this domain have predominantly focused on academic sources, overlooking industry insights. To address these points, the current research took an interdisciplinary literature review approach to examine QoE literature covering both academic and industry sources from diverse fields (i.e., psychology, ergonomics, user experience, communication science, and engineering). Based on this rich dataset, we created a QoE model that illustrated 252 factors grouped into four branches: user, system, context, and content. The main finding of this review is the substantial gap in the current research landscape, where complex interactions among user, system, context, and content factors in VR are overlooked. The current research not only identified this crucial disparity in existing QoE studies but also provided a substantial online repository of over 200 QoE-related factors. 
The repository serves as an indispensable tool for future researchers aiming to construct a more holistic understanding of QoE.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"36 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141586942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}