Pub Date: 2024-06-25, DOI: 10.1007/s10055-024-00994-1
L. Giacomelli, C. Martin Sölch, K. Ledermann
The use of virtual reality (VR) for the management of chronic pain is an intriguing topic. Given the abundance of VR studies and the numerous opportunities presented by this technology in healthcare, a systematic review that focuses on VR and its applications in chronic pain is necessary to shed light on the various modalities available and their actual effectiveness. This systematic review aims to explore the efficacy of reducing pain and improving pain management through VR interventions for people suffering from chronic pain. Following the PRISMA guidelines, data collection was conducted between December 2020 and February 2021 from the following databases: Cochrane Evidence, JSTOR, Science Direct, PubMed Medline, PubMed NIH, Springer Link, PsychNET, PsycINFO - OVID and PsycARTICLES, Wiley Online Library, Web of Science, ProQuest - MEDLINE®, Sage Journals, NCBI – NLM catalog, Medline OVID, Medline EBSCO, Oxford Handbooks Online, PSYNDEX OVID, and Google Scholar. Seventeen articles were included in the qualitative synthesis. Our results highlight that VR interventions, overall, lead to an improvement in pain-related variables, particularly in reducing pain intensity. However, the analyzed articles vary significantly, making them challenging to compare. Future studies could focus on specific types of VR interventions to reduce heterogeneity and permit a more specific analysis. In conclusion, VR interventions have demonstrated their validity and adaptability as a method for managing chronic pain. Nevertheless, further studies are needed to delve into the various categories of VR interventions in more detail.
Title: The effect of virtual reality interventions on reducing pain intensity in chronic pain patients: a systematic review
Pub Date: 2024-06-21, DOI: 10.1007/s10055-024-01024-w
Sarah Higgins, Stephanie Alcock, Bianca De Aveiro, William Daniels, Harry Farmer, Sahba Besharati
In the wake of the COVID-19 pandemic and the rise of social justice movements, increased attention has been directed to levels of intergroup tension worldwide. Racial prejudice is one such tension that permeates societies and creates distinct inequalities at all levels of our social ecosystem. Whether these prejudices are present explicitly (directly or consciously) or implicitly (unconsciously or automatically), manipulating body ownership by embodying an avatar of another race using immersive virtual reality (IVR) presents a promising approach to reducing racial bias. Nevertheless, research findings are contradictory, which is possibly attributed to variances in methodological factors across studies. This systematic review, therefore, aimed to identify variables and methodological variations that may underlie the observed discrepancies in study outcomes. Adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, this systematic review encompassed 12 studies that employed IVR and embodiment techniques to investigate racial attitudes. Subsequently, two mini meta-analyses were performed on four and five of these studies, respectively — both of which utilised the Implicit Association Test (IAT) as a metric to gauge these biases. This review demonstrated that IVR allows not only the manipulation of a sense of body ownership but also the investigation of wider social identities. Despite the novelty of IVR as a tool to help understand and possibly reduce racial bias, our review has identified key limitations in the existing literature. Specifically, we found inconsistencies in the measures and IVR equipment and software employed, as well as diversity limitations in demographic characteristics within both the sampled population and the embodiment of avatars. Future studies are needed to address these critical shortcomings. 
Specific recommendations include: (1) enhancing participant diversity, both in sample representation and by integrating ethnically diverse avatars; (2) employing multi-modal methods in assessing embodiment; (3) increasing consistency in the use and administration of implicit and explicit measures of racial prejudice; and (4) implementing consistent approaches in using IVR hardware and software to enhance the realism of the IVR experience.
Title: Perspective matters: a systematic review of immersive virtual reality to reduce racial prejudice
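The two mini meta-analyses described above pool IAT-based effect sizes across a handful of studies. As an illustration of the pooling step only — the effect sizes and variances below are hypothetical, not taken from the review — a minimal inverse-variance fixed-effect combination can be sketched as:

```python
import math

def fixed_effect_pool(effects, variances):
    """Pool study effect sizes using inverse-variance (fixed-effect) weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

# Hypothetical IAT effect sizes (Cohen's d; negative = reduced bias) and variances
d = [-0.40, -0.15, -0.30, -0.05]
v = [0.04, 0.06, 0.05, 0.08]

pooled, se = fixed_effect_pool(d, v)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
```

A real synthesis would also report heterogeneity statistics (e.g. Q or I²) and, given the methodological variation the review documents, would likely prefer a random-effects model.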
Pub Date: 2024-06-13, DOI: 10.1007/s10055-024-01023-x
Christina-Georgia Serghides, George Christoforides, Nikolas Iakovides, Andreas Aristidou
Rapid technological advancements and the widespread adoption of the internet have diminished the role of the physical library as a main information resource. As the Metaverse evolves, a revolutionary change is anticipated in how social relationships are perceived within an educational context. It is therefore necessary for libraries to upgrade the services they provide, keep pace with technological trends, and be part of this virtual revolution. The design and development of a Virtual Reality (VR) library can become the community and knowledge hub society needs. In this paper, the process of creating a partial digital replica of the Limassol Municipal University Library, a landmark for the city of Limassol, is examined using photogrammetry and 3D modelling. A 3D platform was developed in which users have the perception that they are experiencing the actual library. To that end, a perceptual study was conducted to understand the current usage of physical libraries, examine users’ experience in VR, and identify the requirements and expectations for a virtual library counterpart. Following the suggestions and observations from the perceptual study, five key scenarios were implemented that demonstrate the potential use of a virtual library. This work incorporates the fundamental VR attributes, such as immersiveness, realism, user interactivity, and feedback, as well as other features, such as animated NPCs, 3D audio, ray-casting, and GUIs, that significantly augment the overall VR library user experience, presence, and navigation autonomy. The main effort of this project was to produce a VR representation of an existing physical library, integrated with its key services, as a proof of concept, with emphasis on easy 24/7 access, functionality, and interactivity. These attributes differentiate this work from existing studies. A detailed user evaluation study was conducted upon completion of the final VR library implementation, confirming its key attributes and future viability.
Title: Design and implementation of an interactive virtual library based on its physical counterpart
Pub Date: 2024-06-01, DOI: 10.1007/s10055-024-01013-z
Shani Kimel Naor, Itay Ketko, Ran Yanovich, Amihai Gottlieb, Yotam Bahat, Oran Ben-Gal, Yuval Heled, Meir Plotnik
Soldiers, athletes, and rescue personnel must often maintain cognitive focus while performing intense, prolonged, and physically demanding activities. The simultaneous activation of cognitive and physical functions can disrupt their performance reciprocally. In the current study, we developed and demonstrated the feasibility of a virtual reality (VR)-based experimental protocol that enables rigorous exploration of the effects of prolonged physical and cognitive efforts. A battery of established neurocognitive tests was used to compare novel cognitive tasks to simulated loaded marches. We simulated a 10-km loaded march in our virtual reality environment, with or without integrated cognitive tasks (VR-COG). During three experimental visits, participants were evaluated pre- and post-activity with the Color Trail Test (CTT), the Synthetic Work Environment (SYNWIN) battery for assessing multitasking, and physical tests (i.e., time to exhaustion). Strong to moderate correlations (r ≥ 0.58, p ≤ 0.05) were found between VR-COG scores and scores on the cognitive tests. Both the SYNWIN and CTT showed no condition effects but significant time effects, indicating better performance in the post-activity assessment than in the pre-activity assessment. This novel protocol can contribute to our understanding of physical-cognitive interactions, since virtual environments are ideal for studying high-performance professional activity in realistic but controlled settings.
Title: Bringing the field into the lab: a novel virtual reality outdoor march simulator for evaluating cognitive and physical performance
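The reported associations between VR-COG scores and the neurocognitive test scores are correlation coefficients. As a sketch of that analysis step — the participant scores below are invented for illustration, not the study's data — Pearson's r can be computed as:

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for six participants: VR-COG vs. a lab cognitive test
vr_cog = [62, 70, 55, 80, 66, 74]
lab_test = [58, 72, 50, 85, 63, 70]
r = pearson_r(vr_cog, lab_test)
```

Assessing significance (the p ≤ 0.05 threshold above) would additionally require a t-test on r with n − 2 degrees of freedom, or a library routine such as `scipy.stats.pearsonr`.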
Pub Date: 2024-05-27, DOI: 10.1007/s10055-024-01017-9
Joris Peereboom, Wilbert Tabone, Dimitra Dodou, Joost de Winter
Many collisions between pedestrians and cars are caused by poor visibility, such as occlusion by a parked vehicle. Augmented reality (AR) could help to prevent this problem, but it is unknown to what extent the augmented information needs to be embedded into the world. In this virtual reality experiment with a head-mounted display (HMD), 28 participants were exposed to AR designs, in a scenario where a vehicle approached from behind a parked vehicle. The experimental conditions included a head-locked live video feed of the occluded region, meaning it was fixed in a specific location within the view of the HMD (VideoHead), a world-locked video feed displayed across the street (VideoStreet), and two conformal diminished reality designs: a see-through display on the occluding vehicle (VideoSeeThrough) and a solution where the occluding vehicle has been made semi-transparent (TransparentVehicle). A Baseline condition without augmented information served as a reference. Additionally, the VideoHead and VideoStreet conditions were each tested with and without the addition of a guiding arrow indicating the location of the approaching vehicle. Participants performed 42 trials, 6 per condition, during which they had to hold a key when they felt safe to cross. The keypress percentages and responses from additional questionnaires showed that the diminished-reality TransparentVehicle and VideoSeeThrough designs came out most favourably, while the VideoHead solution caused some discomfort and dissatisfaction. An analysis of head yaw angle showed that VideoHead and VideoStreet caused divided attention between the screen and the approaching vehicle. The use of guiding arrows did not contribute demonstrable added value. AR designs with a high level of local embeddedness are beneficial for addressing occlusion problems when crossing. 
However, the head-locked solutions should not be immediately dismissed because, according to the literature, such solutions can serve tasks where a salient warning or instruction is beneficial.
Title: Head-locked, world-locked, or conformal diminished-reality? An examination of different AR solutions for pedestrian safety in occluded scenarios
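The head-locked vs. world-locked distinction studied above is, at its core, a choice of coordinate frame: a head-locked element is re-posed from the head transform every frame, while a world-locked element keeps a fixed scene pose. A minimal sketch of the two placements (plain vectors, y-up convention assumed; this is an illustration, not the authors' implementation):

```python
import math

def yaw_matrix(deg):
    """Rotation about the vertical (y) axis, y-up convention."""
    a = math.radians(deg)
    return [[math.cos(a), 0.0, math.sin(a)],
            [0.0, 1.0, 0.0],
            [-math.sin(a), 0.0, math.cos(a)]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def head_locked(head_pos, head_rot, offset_local):
    """Head-locked element: its world pose follows the head pose each frame."""
    return [p + o for p, o in zip(head_pos, mat_vec(head_rot, offset_local))]

def world_locked(anchor_world):
    """World-locked element: fixed in the scene, independent of head motion."""
    return anchor_world

head_pos = [0.0, 1.7, 0.0]   # standing eye height, metres
offset = [0.0, 0.0, 2.0]     # 2 m ahead in view space

panel_before = head_locked(head_pos, yaw_matrix(0), offset)
panel_after = head_locked(head_pos, yaw_matrix(90), offset)   # head turns 90°
sign_world = world_locked([5.0, 2.0, 10.0])                   # unaffected by head turn
```

Turning the head moves the head-locked panel's world position while the world-locked sign stays put — which is why the head-locked feed stays in view but competes for attention with the scene.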
Pub Date: 2024-05-25, DOI: 10.1007/s10055-024-01014-y
Matevž Pesek, Nejc Hirci, Klara Žnideršič, Matija Marolt
This study analyzes the effect of using a virtual reality (VR) game as a complementary tool to improve users’ rhythmic performance and perception in a remote, self-learning environment. In recent years, remote learning has gained importance due to various everyday situations; however, the effects of using VR in such settings for individual and self-directed learning have yet to be evaluated. In music education, learning processes usually depend heavily on face-to-face communication with a teacher and are based on a formal or informal curriculum. The aim of this study is to investigate the potential of gamified VR learning and its influence on users’ rhythmic sensory and perceptual abilities. We developed a drum-playing game based on a tower-defense scenario designed to improve four aspects of rhythmic perceptual skill in elementary school children with various levels of music learning experience. In this study, 14 elementary school children received Meta Quest 2 headsets for individual use during a 14-day training period. An analysis of their rhythmic performance before and after the training sessions showed a significant improvement in their rhythmic skills. In addition, the experience of playing the VR game and using the HMD setup was also assessed, highlighting some of the challenges of currently available affordable headsets for gamified learning scenarios.
Title: Enhancing music rhythmic perception and performance with a VR game
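Comparing rhythmic performance before and after training implies some measure of distance between produced onsets and target beats. The metric below is a hypothetical stand-in — the abstract does not specify the paper's actual scoring — using mean absolute deviation of tap times from the nearest target beat:

```python
def rhythmic_accuracy(tap_times, beat_times):
    """Mean absolute deviation (seconds) of each tap from its nearest target beat.
    Lower is better. A crude illustrative score, not the study's metric."""
    devs = [min(abs(t - b) for b in beat_times) for t in tap_times]
    return sum(devs) / len(devs)

# One tap 20 ms late on the third beat of a 120 BPM grid (beats every 0.5 s)
score = rhythmic_accuracy([0.0, 0.5, 1.02], [0.0, 0.5, 1.0])
```

A pre/post comparison would compute this score for each child in both sessions and test the paired difference.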
Pub Date: 2024-05-25, DOI: 10.1007/s10055-024-01010-2
D. A. Pérez-Ferrara, G. Y. Flores-Medina, E. Landa-Ramírez, D. J. González-Sánchez, J. A. Luna-Padilla, A. L. Sosa-Millán, A. Mondragón-Maya
To date, many interventions for social cognition have been developed. Nevertheless, the use of social cognition training with virtual reality (SCT-VR) in schizophrenia is a recent field of study. A scoping review is therefore a suitable method to examine the extent of the existing literature, the characteristics of the studies, and the SCT-VR interventions themselves. Additionally, it allows us to summarize findings from a heterogeneous body of knowledge and identify gaps in the literature, favoring the planning and conduct of future research. The aim of this review was to explore and describe the characteristics of SCT-VR in schizophrenia. The searched databases were MEDLINE, PsycInfo, Web of Science, and CINAHL. This scoping review considered experimental, quasi-experimental, analytical observational, and descriptive observational study designs. The full text of selected citations was assessed by two independent reviewers, and data were extracted from the included papers by two independent reviewers. We identified 1,407 records; a total of twelve studies were included for analysis. Study designs varied; most were proof-of-concept or pilot studies. Most SCT-VR interventions were immersive and targeted. The number of sessions ranged from 9 to 16, and the duration of each session ranged from 45 to 120 min. Some studies reported a significant improvement in emotion recognition and/or theory of mind. However, SCT-VR is a recent research field in which heterogeneity in methodological approaches is evident and has prevented robust conclusions. Preliminary evidence suggests that SCT-VR could represent a feasible and promising approach for improving social cognition deficits in schizophrenia.
Title: Social cognition training using virtual reality for people with schizophrenia: a scoping review
This paper introduces a methodology tailored to capture, post-process, and replicate audio-visual data of outdoor environments (urban or natural) for VR experiments carried out within a controlled laboratory environment. The methodology consists of 360° video and higher-order ambisonic (HOA) field recordings, followed by calibrated spatial sound reproduction with a spherical loudspeaker array and video playback via a head-mounted display, using a game engine and a graphical user interface for administering a perceptual questionnaire. Attention was given to the equalisation and calibration of the ambisonic microphone and to the design of different ambisonic decoders. A listening experiment was conducted to evaluate four different decoders (one 2D first-order ambisonic decoder and three 3D third-order decoders) by asking participants to rate the relative (perceived) realism of recorded outdoor soundscapes reproduced with these decoders. The results showed that the third-order decoders were ranked as more realistic.
{"title":"Replicating outdoor environments using VR and ambisonics: a methodology for accurate audio-visual recording, processing and reproduction","authors":"Fotis Georgiou, Claudia Kawai, Beat Schäffer, Reto Pieren","doi":"10.1007/s10055-024-01003-1","DOIUrl":"https://doi.org/10.1007/s10055-024-01003-1","url":null,"abstract":"<p>This paper introduces a methodology tailored to capture, post-process, and replicate audio-visual data of outdoor environments (urban or natural) for VR experiments carried out within a controlled laboratory environment. The methodology consists of 360<span>(^circ)</span> video and higher order ambisonic (HOA) field recordings and subsequent calibrated spatial sound reproduction with a spherical loudspeaker array and video played back via a head-mounted display using a game engine and a graphical user interface for a perceptual experimental questionnaire. Attention was given to the equalisation and calibration of the ambisonic microphone and to the design of different ambisonic decoders. A listening experiment was conducted to evaluate four different decoders (one 2D first-order ambisonic decoder and three 3D third-order decoders) by asking participants to rate the relative (perceived) realism of recorded outdoor soundscapes reproduced with these decoders. The results showed that the third-order decoders were ranked as more realistic.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"4 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141059830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
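The mode-matching approach that underlies basic ambisonic decoder design can be sketched in a few lines. This is a minimal, hypothetical illustration — not the authors' calibrated decoder — showing a first-order B-format signal decoded to a square loudspeaker layout via the pseudo-inverse of the layout's re-encoding matrix; the layout angles and channel ordering are assumptions for the sketch.

```python
import numpy as np

def fo_encoding_matrix(azimuths_deg, elevations_deg):
    """First-order ambisonic encoding gains, one row per direction.

    Columns: [W (omni), X (front-back), Y (left-right), Z (up-down)].
    """
    az = np.radians(azimuths_deg)
    el = np.radians(elevations_deg)
    return np.column_stack([
        np.ones_like(az),
        np.cos(az) * np.cos(el),
        np.sin(az) * np.cos(el),
        np.sin(el),
    ])

# Hypothetical 2D square array at ear height (azimuths in degrees)
speakers_az = np.array([45.0, 135.0, -135.0, -45.0])
speakers_el = np.zeros(4)

# Mode-matching decoder: pseudo-inverse of the array's re-encoding matrix
E = fo_encoding_matrix(speakers_az, speakers_el)   # speakers x channels
D = np.linalg.pinv(E.T)                            # speakers x channels

# Decode one B-format frame: a unit-gain source encoded at 45° azimuth
source_bformat = fo_encoding_matrix(np.array([45.0]), np.array([0.0]))[0]
gains = D @ source_bformat   # loudspeaker gains; largest at the 45° speaker
```

As expected for mode matching, the speaker closest to the encoded source direction receives the largest gain; real decoder designs (e.g. max-rE weighting, dual-band decoding) refine this basic scheme.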
Pub Date : 2024-05-15DOI: 10.1007/s10055-024-01007-x
Sara Vlahovic, Lea Skorin-Kapov, Mirko Suznjevic, Nina Pavlin-Bernardic
Uncomfortable sensations that arise during virtual reality (VR) use have always been among the industry’s biggest challenges. While certain VR-induced effects, such as cybersickness, have garnered considerable interest from academia and industry over the years, others have been overlooked and under-researched. Recently, the research community has been calling for more holistic approaches to studying the issue of VR discomfort. Focusing on active VR gaming, our article presents the results of two user studies with a total of 40 participants. Incorporating state-of-the-art VR-specific measures (the Simulation Task Load Index—SIM-TLX, Cybersickness Questionnaire—CSQ, and Virtual Reality Sickness Questionnaire—VRSQ) into our methodology, we examined workload, musculoskeletal discomfort, device-related discomfort, cybersickness, and changes in reaction time following VR gameplay. Using a set of six different active VR games (three per study), we attempted to quantify and compare the prevalence and intensity of VR-induced symptoms across different genres and game mechanics. The diverse symptoms reported in our study, which varied between individuals as well as games, highlight the importance of including measures of VR-induced effects other than cybersickness in VR gaming user studies, while questioning the suitability of the Simulator Sickness Questionnaire (SSQ)—arguably the most prevalent measure of VR discomfort in the field—for use with active VR gaming scenarios.
{"title":"Not just cybersickness: short-term effects of popular VR game mechanics on physical discomfort and reaction time","authors":"Sara Vlahovic, Lea Skorin-Kapov, Mirko Suznjevic, Nina Pavlin-Bernardic","doi":"10.1007/s10055-024-01007-x","DOIUrl":"https://doi.org/10.1007/s10055-024-01007-x","url":null,"abstract":"<p>Uncomfortable sensations that arise during virtual reality (VR) use have always been among the industry’s biggest challenges. While certain VR-induced effects, such as cybersickness, have garnered a lot of interest from academia and industry over the years, others have been overlooked and underresearched. Recently, the research community has been calling for more holistic approaches to studying the issue of VR discomfort. Focusing on active VR gaming, our article presents the results of two user studies with a total of 40 participants. Incorporating state-of-the-art VR-specific measures (the Simulation Task Load Index—SIM-TLX, Cybersickness Questionnaire—CSQ, Virtual Reality Sickness Questionnaire—VRSQ) into our methodology, we examined workload, musculoskeletal discomfort, device-related discomfort, cybersickness, and changes in reaction time following VR gameplay. Using a set of six different active VR games (three per study), we attempted to quantify and compare the prevalence and intensity of VR-induced symptoms across different genres and game mechanics. 
Varying between individuals, as well as games, the diverse symptoms reported in our study highlight the importance of including measures of VR-induced effects other than cybersickness into VR gaming user studies, while questioning the suitability of the Simulator Sickness Questionnaire (SSQ)—arguably the most prevalent measure of VR discomfort in the field—for use with active VR gaming scenarios.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"304 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140939691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
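Pre/post reaction-time changes of the kind measured in this study are naturally analyzed as paired comparisons, since each participant serves as their own control. A minimal sketch with hypothetical data — the study's actual sample sizes, tasks, and analysis are not specified here:

```python
import numpy as np
from scipy import stats

# Hypothetical simple reaction times (ms) for 8 participants,
# measured before and after a VR gaming session
pre = np.array([230.0, 245.0, 260.0, 250.0, 240.0, 255.0, 270.0, 235.0])
post = pre + np.array([12.0, 5.0, 18.0, 9.0, 14.0, 7.0, 20.0, 11.0])

# Paired t-test: tests whether the within-participant change differs from zero
t_stat, p_value = stats.ttest_rel(post, pre)
mean_change = float(np.mean(post - pre))   # average slowing in ms
```

Pairing removes between-participant baseline variability from the comparison, which is why it is preferred over an unpaired test for pre/post designs like this one.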
Pub Date : 2024-05-06DOI: 10.1007/s10055-024-00989-y
Rodrigo Lima, Alice Chirico, Rui Varandas, Hugo Gamboa, Andrea Gaggioli, Sergi Bermúdez i Badia
Affective computing has been widely used to detect and recognize emotional states. The main goal of this study was to automatically detect emotional states using machine learning algorithms. The experimental procedure involved eliciting emotional states using film clips in an immersive and a non-immersive virtual reality setup. The participants’ physiological signals were recorded and analyzed to train machine learning models to recognize users’ emotional states. Furthermore, two subjective emotional rating scales were used to rate each film clip. Results showed no significant differences between presenting the stimuli in the two degrees of immersion. Regarding emotion classification, it emerged that, for both physiological signals and subjective ratings, user-dependent models perform better than user-independent models. With user-dependent models, we obtained an average accuracy of 69.29 ± 11.41% and 71.00 ± 7.95% for the subjective ratings and physiological signals, respectively. With user-independent models, on the other hand, the accuracies were 54.0 ± 17.2% and 24.9 ± 4.0%, respectively. We interpret these results as a consequence of high inter-subject variability among participants, suggesting the need for user-dependent classification models. In future work, we intend to develop new classification algorithms and transfer them to a real-time implementation. This will make it possible to adapt a virtual reality environment in real time according to the user’s emotional state.
{"title":"Multimodal emotion classification using machine learning in immersive and non-immersive virtual reality","authors":"Rodrigo Lima, Alice Chirico, Rui Varandas, Hugo Gamboa, Andrea Gaggioli, Sergi Bermúdez i Badia","doi":"10.1007/s10055-024-00989-y","DOIUrl":"https://doi.org/10.1007/s10055-024-00989-y","url":null,"abstract":"<p>Affective computing has been widely used to detect and recognize emotional states. The main goal of this study was to detect emotional states using machine learning algorithms automatically. The experimental procedure involved eliciting emotional states using film clips in an immersive and non-immersive virtual reality setup. The participants’ physiological signals were recorded and analyzed to train machine learning models to recognize users’ emotional states. Furthermore, two subjective ratings emotional scales were provided to rate each emotional film clip. Results showed no significant differences between presenting the stimuli in the two degrees of immersion. Regarding emotion classification, it emerged that for both physiological signals and subjective ratings, user-dependent models have a better performance when compared to user-independent models. We obtained an average accuracy of 69.29 ± 11.41% and 71.00 ± 7.95% for the subjective ratings and physiological signals, respectively. On the other hand, using user-independent models, the accuracy we obtained was 54.0 ± 17.2% and 24.9 ± 4.0%, respectively. We interpreted these data as the result of high inter-subject variability among participants, suggesting the need for user-dependent classification models. In future works, we intend to develop new classification algorithms and transfer them to real-time implementation. 
This will make it possible to adapt to a virtual reality environment in real-time, according to the user’s emotional state.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"107 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140882288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
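The distinction between user-dependent and user-independent models maps directly onto how cross-validation folds are built: within-subject folds for the former, leave-one-subject-out folds for the latter. A minimal sketch with synthetic stand-in features and labels — the study's actual feature extraction and classifiers are not specified here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-in data: 10 participants x 30 clips x 4 physiological features
n_subjects, n_clips = 10, 30
subjects = np.repeat(np.arange(n_subjects), n_clips)   # group label per sample
X = rng.normal(size=(n_subjects * n_clips, 4))
y = rng.integers(0, 2, size=n_subjects * n_clips)      # binary emotion label

clf = RandomForestClassifier(n_estimators=50, random_state=0)

# User-independent: leave-one-subject-out — train on 9 subjects,
# test on the held-out subject, so no subject appears in both splits
indep_acc = cross_val_score(clf, X, y, groups=subjects,
                            cv=LeaveOneGroupOut()).mean()

# User-dependent: a separate model per subject,
# evaluated with 5-fold CV within that subject's own data
dep_accs = [cross_val_score(clf, X[subjects == s], y[subjects == s], cv=5).mean()
            for s in range(n_subjects)]
dep_acc = float(np.mean(dep_accs))
```

With real physiological data, high inter-subject variability — the explanation the authors give for their results — shows up exactly as a gap between these two evaluation schemes.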