Pub Date: 2024-03-15 | DOI: 10.1007/s10055-024-00978-1
João Pedro Mucheroni Covolan, Claiton Oliveira, Silvio Ricardo Rodrigues Sanches, Antonio Carlos Sementille
An Augmented Reality (AR) system must show real and virtual elements as if they coexisted in the same environment. Three-dimensional alignment (registration) is particularly challenging on specific hardware configurations such as Head-Mounted Displays (HMDs) that use Optical See-Through (OST) technology. In general, the calibration of HMDs uses deterministic optimization methods. However, non-deterministic methods have been proposed in the literature with promising results in distinct research areas. In this work, we developed a non-deterministic optimization method for the semi-automatic calibration of smartphone-based OST HMDs. We tested simulated annealing, evolutionary strategy, and particle swarm algorithms. We also developed a calibration system and evaluated it through an application that aligned a virtual object in an AR environment. We evaluated our method using the Mean Squared Error (MSE) at each calibration step, considering the difference between the ideal/observed positions of a set of reference points and those estimated from the values determined for the calibration parameters. Our results show an accurate OST HMD calibration for the peripersonal space, with similar MSEs for the three tested algorithms.
Title: Non-deterministic method for semi-automatic calibration of smartphone-based OST HMDs (Virtual Reality)
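The calibration described in this abstract searches for parameter values that minimize the MSE between ideal/observed reference points and the points produced by the candidate parameters. As an illustration only, here is a minimal simulated-annealing sketch assuming a toy 2D scale-and-offset parameter model; the paper's actual parameterization, projection model, and cooling schedule are not specified here:

```python
import math
import random

def mse(params, ideal_pts, raw_pts):
    """MSE between ideal reference points and raw points transformed by the
    candidate calibration parameters (hypothetical scale/offset model)."""
    sx, sy, tx, ty = params
    err = 0.0
    for (ix, iy), (rx, ry) in zip(ideal_pts, raw_pts):
        ex, ey = sx * rx + tx, sy * ry + ty
        err += (ex - ix) ** 2 + (ey - iy) ** 2
    return err / len(ideal_pts)

def simulated_annealing(ideal_pts, raw_pts, steps=5000, t0=1.0, seed=0):
    """Non-deterministic search for calibration parameters minimizing MSE."""
    rng = random.Random(seed)
    current = [1.0, 1.0, 0.0, 0.0]          # identity calibration as start
    current_cost = mse(current, ideal_pts, raw_pts)
    best, best_cost = list(current), current_cost
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9   # linear cooling schedule
        candidate = [p + rng.gauss(0, 0.05) for p in current]
        cost = mse(candidate, ideal_pts, raw_pts)
        # always accept improvements; accept worse moves with Boltzmann probability
        if cost < current_cost or rng.random() < math.exp((current_cost - cost) / t):
            current, current_cost = candidate, cost
        if current_cost < best_cost:
            best, best_cost = list(current), current_cost
    return best, best_cost
```

Swapping the proposal step for a population of mutated candidates (evolutionary strategy) or velocity-updated particles (particle swarm) yields the other two algorithms the authors compare under the same MSE objective.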
The illusory experience of self-motion, known as vection, is a multisensory phenomenon relevant to self-motion processes. While some studies have shown that neck muscle vibrations can improve self-motion parameter estimation, their influence on vection remains unknown. Further, the few studies that measured cybersickness (CS), presence, and vection concurrently have shown conflicting results. It was hypothesized that (1) neck vibrations would enhance vection and presence, and (2) CS would relate negatively to presence and vection, whereas presence and vection would relate positively to one another. Thirty-two participants were visually and audibly immersed in a virtual reality flight simulator, and occasional neck muscle vibrations were presented. Vection onset and duration were reported through button presses. Turning angle estimations and ratings of vection quality, presence, and CS were obtained after completion of the flights. Results showed no influence of vibrations on turning angle estimation errors, but a medium positive effect of vibrations on vection quality was found. Presence and vection quality were positively related, and no strong association between CS and presence or vection was found. It is concluded that neck vibrations may enhance vection and presence; however, from the current study it is unclear whether this is due to proprioceptive or tactile stimulation.
Title: Investigating the influence of neck muscle vibration on illusory self-motion in virtual reality
Pub Date: 2024-03-15 | DOI: 10.1007/s10055-024-00951-y
Lars Kooijman, Houshyar Asadi, Camilo Gonzalez Arango, Shady Mohamed, Saeid Nahavandi
Pub Date: 2024-03-08 | DOI: 10.1007/s10055-024-00948-7
Mahdiyeh Sadat Moosavi, Pierre Raimbaud, Christophe Guillet, Frédéric Mérienne
This study investigates weight perception in virtual reality without kinesthetic feedback from the real world, by means of an illusory method called pseudo-haptics. This illusory model dissociates visual input from somatosensory feedback and tries to induce the sensation of virtual objects' loads in VR users by manipulating visual input. For that, modifications of the control-display ratio, i.e., the ratio between the real and virtual motions of the arm, can be used to produce a visual illusory effect on the virtual objects' positions as well. VR users perceive this as velocity variations in the objects' displacements, helping them achieve a better sensation of virtual weight. A primary contribution of this paper is the development of a novel, holistic assessment methodology that measures the sense of presence in virtual reality contexts, particularly when participants are lifting virtual objects and experiencing their weight. Our study examined the effect of virtual object weight on the kinematic parameters and velocity profiles of participants' upward arm motions, along with a parallel experiment conducted using real weights. By comparing the lifting of real objects with that of virtual objects, it was possible to gain insight into the variations in kinematic features observed in participants' arm motions. Additionally, subjective measurements using the Borg CR10 questionnaire were conducted to assess participants' perceptions of hand fatigue. The analysis of the collected data, encompassing both subjective and objective measurements, showed that participants experienced similar sensations of fatigue and changes in hand kinematics during both the virtual object tasks driven by pseudo-haptic feedback and the real weight-lifting tasks. This consistency underscores the efficacy of pseudo-haptic feedback in simulating realistic weight sensations in virtual environments.
Title: Enhancing weight perception in virtual reality: an analysis of kinematic features
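The control-display ratio manipulation described above can be illustrated with a small sketch. The mapping below is hypothetical (the function `cd_ratio`, the tuning constant `k`, and the linear-in-mass form are assumptions, not the authors' model): heavier virtual objects lower the ratio, so the virtual arm lags the real arm and the lift feels more effortful.

```python
def cd_ratio(virtual_mass_kg, k=0.15):
    """Hypothetical control-display ratio: 1.0 for a weightless object,
    decreasing as the virtual mass grows. k is an assumed tuning constant."""
    return 1.0 / (1.0 + k * virtual_mass_kg)

def virtual_displacement(real_dy, virtual_mass_kg):
    """Displayed upward motion of the virtual hand for a real upward
    motion real_dy (same units, e.g. meters)."""
    return real_dy * cd_ratio(virtual_mass_kg)
```

For example, a 10 cm real lift of a 5 kg virtual object would display as roughly 5.7 cm of virtual motion under these assumed constants, producing the slower perceived velocity the abstract describes.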
Pub Date: 2024-03-08 | DOI: 10.1007/s10055-024-00956-7
Triton Ong, Julia Ivanova, Hiral Soni, Hattie Wilczewski, Janelle Barrera, Mollie Cummins, Brandon M. Welch, Brian E. Bunnell
Virtual reality (VR) can enhance mental health care. In particular, the effectiveness of VR-based exposure therapy (VRET) has been well demonstrated for the treatment of anxiety disorders. However, most applications of VRET remain localized to clinic spaces. We aimed to explore mental health therapists’ perceptions of telehealth-based VRET (tele-VRET) by conducting semi-structured, qualitative interviews with 18 telemental health therapists between October and December 2022. Interview topics included telehealth experiences, exposure therapy over telehealth, previous experiences with VR, and perspectives on tele-VRET. Therapists described how telehealth reduced barriers (88.9%, 16/18), enhanced therapy (61.1%, 11/18), and improved access to clients (38.9%, 7/18), but entailed problems with technology (61.1%, 11/18), uncontrolled settings (55.6%, 10/18), and communication difficulties (50%, 9/18). Therapists adapted exposure therapy to telehealth by using online resources (66.7%, 12/18), preparing client expectations (55.6%, 10/18), and adjusting workflows (27.8%, 5/18). Most therapists had used VR before (72.2%, 13/18) and had positive impressions of VR (55.6%, 10/18), but none had used VR clinically. In response to tele-VRET, therapists requested interactive session activities (77.8%, 14/18) and customizable intervention components (55.6%, 10/18). Concerns about tele-VRET included risks with certain clients (77.8%, 14/18), costs (50%, 9/18), side effects and privacy (22.2%, 4/18), and inappropriateness for specific forms of exposure therapy (16.7%, 3/18). These results reveal how combining telehealth and VRET may expand therapeutic options for mental healthcare providers and can help inform collaborative development of immersive health technologies.
Title: Therapist perspectives on telehealth-based virtual reality exposure therapy
Pub Date: 2024-03-08 | DOI: 10.1007/s10055-024-00970-9
Henar Guillen-Sanz, David Checa, Ines Miguel-Alonso, Andres Bustillo
Wearable biosensors are increasingly incorporated in immersive Virtual Reality (iVR) applications, a trend attributed to the availability of better-quality, less costly, and easier-to-use devices. However, consensus has yet to emerge on the optimal combinations. The aim of this review is to identify the best examples of biosensor usage in combination with iVR applications. The large number of papers in the review (560) were classified into seven fields of application: psychology, medicine, sports, education, ergonomics, military, and tourism and marketing. The use of each type of wearable biosensor and Head-Mounted Display was analyzed for each field of application. Then, the development of the iVR applications was analyzed according to their goals, user interaction levels, and the possibility of adapting the iVR environment to biosensor feedback. Finally, the evaluation of the iVR experience was studied, considering issues such as sample size, the presence of a control group, and post-assessment routines. Through this working method, the most common solutions, the best practices, and the most promising trends in biofeedback-based iVR applications were identified for each field of application. In addition, guidelines oriented towards good practice are proposed for the development of future iVR applications with biofeedback. The results of this review suggest that the use of biosensors within iVR environments needs to be standardized in some fields of application, especially regarding the adaptation of the iVR experience to real-time biosignals to improve user performance.
Title: A systematic review of wearable biosensor usage in immersive virtual reality experiences
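The adaptation of an iVR experience to real-time biosignals mentioned in this review can be pictured with a hypothetical feedback rule; the target heart-rate band, step size, and the idea of a scalar "intensity" in [0, 1] are all invented for this sketch, not taken from the review:

```python
def adapt_intensity(current_intensity, heart_rate_bpm,
                    target_low=70.0, target_high=100.0, step=0.1):
    """Hypothetical biofeedback rule: ease the iVR experience when the
    user's heart rate exceeds the target band, intensify it when below,
    and leave it unchanged inside the band. Result is clamped to [0, 1]."""
    if heart_rate_bpm > target_high:
        current_intensity -= step
    elif heart_rate_bpm < target_low:
        current_intensity += step
    return min(1.0, max(0.0, current_intensity))
```

A real application would smooth the biosignal and calibrate the band per user, which is exactly the kind of procedure the review argues should be standardized.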
Pub Date: 2024-03-08 | DOI: 10.1007/s10055-024-00942-z
Qing Gong, Ning Zou, Wenjing Yang, Qi Zheng, Pengrui Chen
A virtual reality (VR) based cultural heritage exhibition (VRCHE) is an important type of VR-based museum exhibition. The user experience (UX) design of VRCHEs faces new opportunities, and because human–computer interaction differs between VR-based and conventional interfaces, proposing a UX model for VRCHEs is crucial. Although some existing works study the UX models of VRCHEs, the methodologies and experimental materials they use are not complete enough to describe the UX of VRCHEs or to offer design strategies. This study designs experiments based on grounded theory that combine qualitative and quantitative approaches. The study then synthesizes the three-level coding and quantitative analysis findings from grounded theory, builds a detailed model of VRCHE UX using theoretical coding, and proposes design strategies.
Title: User experience model and design strategies for virtual reality-based cultural heritage exhibition
This paper introduces a Virtual Reality (VR) application tailored for fashion designers and retailers, transcending traditional garment design and demonstration boundaries by presenting an immersive digital garment showcase within a captivating VR environment. Simulating a virtual retail store, designers navigate freely, selecting from an array of avatar-garment combinations and exploring garments from diverse perspectives. This immersive experience offers designers a precise representation of the final product’s aesthetics, fit, and functionality on the human body. Our application can be considered a pre-manufacturing layer that empowers designers and retailers with a precise understanding of how the actual garment will look and behave. Evaluation involved comprehensive feedback from both professional and undergraduate fashion designers, gathered through usability testing sessions.
Title: VR Designer: enhancing fashion showcases through immersive virtual garment fitting
Pub Date: 2024-03-08 | DOI: 10.1007/s10055-024-00945-w
Orestis Sarakatsanos, Anastasios Papazoglou-Chalikias, Machi Boikou, Elisavet Chatzilari, Michaela Jauk, Ursina Hafliger, Spiros Nikolopoulos, Ioannis Kompatsiaris
Pub Date: 2024-03-08 | DOI: 10.1007/s10055-024-00947-8
José L. Gómez-Sirvent, Alicia Fernández-Sotos, Antonio Fernández-Caballero, Desirée Fernández-Sotos
Performance anxiety is a common problem affecting musicians’ concentration and well-being. Musicians frequently encounter greater challenges and emotional discomfort when performing in front of an audience. Recent research suggests an important relationship between the characteristics of the built environment and people’s well-being. In this study, we explore modifying the built environment to create spaces where musicians are less aware of the presence of the audience and can express themselves more comfortably. An experiment was conducted with 61 conservatory musicians playing their instrument in a virtual auditorium in front of an audience of hundreds of virtual humans. They performed at different distances from the audience and under different levels of ambient lighting, while their eye movements were recorded. These data, together with questionnaires, were used to analyse the way the environment is perceived. The results showed that reducing the light intensity above the audience made the view of the auditorium more calming, and the same effect was observed when the distance between the audience and the musician was increased. Eye-tracking data showed a significant reduction in saccadic eye movements as the distance from the audience increased. This work provides a novel approach to studying the influence of architecture on musicians’ experience during solo performances. The findings are useful to designers and researchers.
Title: Assessment of music performance anxiety in a virtual auditorium through the study of ambient lighting and audience distance
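The study above quantifies saccadic eye movements from eye-tracking data. A common way to detect saccades, assumed here for illustration and not necessarily the authors' method, is a velocity threshold over successive gaze samples:

```python
import math

def count_saccades(gaze_xy, dt, velocity_threshold=100.0):
    """Count saccades as onsets where gaze speed crosses above a threshold.
    gaze_xy: list of (x, y) gaze samples (assumed in degrees of visual angle);
    dt: sampling interval in seconds; threshold in the same units per second."""
    count = 0
    in_saccade = False
    for (x0, y0), (x1, y1) in zip(gaze_xy, gaze_xy[1:]):
        speed = math.dist((x0, y0), (x1, y1)) / dt
        if speed > velocity_threshold and not in_saccade:
            count += 1            # new saccade onset
            in_saccade = True
        elif speed <= velocity_threshold:
            in_saccade = False    # back to fixation
    return count
```

A lower saccade count over a fixed performance duration, as reported for the larger audience distances, would correspond to longer fixations and a calmer scanning pattern.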
Pub Date: 2024-03-06 | DOI: 10.1007/s10055-024-00960-x
Laura Pérez-Pachón, Parivrudh Sharma, Helena Brech, Jenny Gregory, Terry Lowe, Matthieu Poyade, Flora Gröning
Novel augmented reality headsets such as HoloLens can be used to overlay patient-specific virtual models of resection margins on the patient’s skin, providing surgeons with information not normally available in the operating room. For this to be useful, surgeons wearing the headset must be able to localise virtual models accurately. We measured the error with which users localise virtual models at different positions and distances from their eyes. Healthy volunteers aged 20–59 years (n = 54) performed 81 exercises involving the localisation of a virtual hexagon’s vertices overlaid on a monitor surface. Nine predefined positions and three distances between the virtual hexagon and the users’ eyes (65, 85 and 105 cm) were set. We found that some model positions and the shortest distance (65 cm) led to larger localisation errors than other positions and the larger distances (85 and 105 cm). Positional errors of more than 5 mm and margin errors of 1–5 mm were found in 29.8% and over 40% of cases, respectively. Strong outliers were also found (e.g. margin shrinkage of up to 17.4 mm in 4.3% of cases). The measured errors may result in poor surgical outcomes, e.g. incomplete tumour excision or inaccurate flap design, which can potentially lead to tumour recurrence and flap failure, respectively. Reducing the localisation errors associated with arm-reach distances between the virtual models and users’ eyes is necessary for augmented reality headsets to be suitable for surgical purposes. In addition, training surgeons in the use of these headsets may help to minimise localisation errors.
{"title":"Augmented reality headsets for surgical guidance: the impact of holographic model positions on user localisation accuracy","authors":"Laura Pérez-Pachón, Parivrudh Sharma, Helena Brech, Jenny Gregory, Terry Lowe, Matthieu Poyade, Flora Gröning","doi":"10.1007/s10055-024-00960-x","DOIUrl":"https://doi.org/10.1007/s10055-024-00960-x","url":null,"abstract":"<p>Novel augmented reality headsets such as HoloLens can be used to overlay patient-specific virtual models of resection margins on the patient’s skin, providing surgeons with information not normally available in the operating room. For this to be useful, surgeons wearing the headset must be able to localise virtual models accurately. We measured the error with which users localise virtual models at different positions and distances from their eyes. Healthy volunteers aged 20–59 years (<i>n</i> = 54) performed 81 exercises involving the localisation of a virtual hexagon’s vertices overlaid on a monitor surface. Nine predefined positions and three distances between the virtual hexagon and the users’ eyes (65, 85 and 105 cm) were set. We found that some model positions and the shortest distance (65 cm) led to larger localisation errors than other positions and larger distances (85 and 105 cm). Positional errors of more than 5 mm and 1–5 mm margin errors were found in 29.8% and over 40% of cases, respectively. Strong outliers were also found (e.g. margin shrinkage of up to 17.4 mm in 4.3% of cases). The measured errors may result in poor outcomes of surgeries: e.g. incomplete tumour excision or inaccurate flap design, which can potentially lead to tumour recurrence and flap failure, respectively. Reducing localisation errors associated with arm reach distances between the virtual models and users’ eyes is necessary for augmented reality headsets to be suitable for surgical purposes. 
In addition, training surgeons on the use of these headsets may help to minimise localisation errors.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"44 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140055562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
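The study above reports two error measures for the overlaid hexagon: per-vertex positional error (Euclidean distance between true and localised vertices) and margin error such as shrinkage. A small illustrative sketch, where the hexagon geometry and the 10% shrinkage are invented for the example and margin error is taken as the change in mean vertex-to-centroid distance (one plausible reading, not necessarily the paper's exact definition):

```python
import math

def positional_errors(true_pts, localised_pts):
    """Per-vertex Euclidean distance (mm) between true and localised positions."""
    return [math.dist(t, l) for t, l in zip(true_pts, localised_pts)]

def margin_error(true_pts, localised_pts):
    """Signed change in mean vertex-to-centroid distance; negative = shrinkage."""
    def mean_radius(pts):
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        return sum(math.dist((cx, cy), p) for p in pts) / len(pts)
    return mean_radius(localised_pts) - mean_radius(true_pts)

# Regular hexagon of radius 50 mm; "localised" copy uniformly shrunk by 10%
hexagon = [(50 * math.cos(math.pi / 3 * k), 50 * math.sin(math.pi / 3 * k))
           for k in range(6)]
localised = [(0.9 * x, 0.9 * y) for x, y in hexagon]
errors = positional_errors(hexagon, localised)   # 5 mm at every vertex
shrink = margin_error(hexagon, localised)        # -5 mm (margin shrinkage)
```

A 10% uniform shrink of a 50 mm hexagon yields 5 mm positional error at every vertex and 5 mm of margin shrinkage, which is the scale at which the study flags clinically relevant errors.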
Pub Date : 2024-03-06DOI: 10.1007/s10055-024-00953-w
Mariano Banquiero, Gracia Valdeolivas, David Ramón, M.-Carmen Juan
This work presents the development of a mixed reality (MR) application that uses color Passthrough for learning to play the piano. A study (N = 33) compared participants' interpretation outcomes and subjective experience when using the MR application to learn to play the piano versus a system that used Synthesia. The results show that both the MR application and Synthesia were effective for learning piano. However, the students played the pieces significantly better when using the MR application. Both applications provided a satisfying user experience; however, the students' subjective experience was better when they used the MR application. Other conclusions derived from the study include the following: (1) The outcomes of the students and their subjective opinion about the experience when using the MR application were independent of age and gender; (2) the sense of presence offered by the MR application was high (above 6 on a scale of 1 to 7); (3) the adverse effects induced by wearing the Meta Quest Pro and using our MR application were negligible; and (4) the students showed their preference for the MR application. In conclusion, the advantage of our MR application compared to other types of applications (e.g., non-projected piano roll notation) is that the user has a direct view of the piano and the help elements appear integrated into the user's view. The user does not have to take their eyes off the keyboard and can stay focused on playing the piano.
{"title":"A color Passthrough mixed reality application for learning piano","authors":"Mariano Banquiero, Gracia Valdeolivas, David Ramón, M.-Carmen Juan","doi":"10.1007/s10055-024-00953-w","DOIUrl":"https://doi.org/10.1007/s10055-024-00953-w","url":null,"abstract":"<p>This work presents the development of a mixed reality (MR) application that uses color Passthrough for learning to play the piano. A study (<i>N</i> = 33) compared participants' interpretation outcomes and subjective experience when using the MR application to learn to play the piano versus a system that used Synthesia. The results show that both the MR application and Synthesia were effective for learning piano. However, the students played the pieces significantly better when using the MR application. Both applications provided a satisfying user experience; however, the students' subjective experience was better when they used the MR application. Other conclusions derived from the study include the following: (1) The outcomes of the students and their subjective opinion about the experience when using the MR application were independent of age and gender; (2) the sense of presence offered by the MR application was high (above 6 on a scale of 1 to 7); (3) the adverse effects induced by wearing the Meta Quest Pro and using our MR application were negligible; and (4) the students showed their preference for the MR application. In conclusion, the advantage of our MR application compared to other types of applications (e.g., non-projected piano roll notation) is that the user has a direct view of the piano and the help elements appear integrated into the user's view. 
The user does not have to take their eyes off the keyboard and is focused on playing the piano.</p>","PeriodicalId":23727,"journal":{"name":"Virtual Reality","volume":"82 1","pages":""},"PeriodicalIF":4.2,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140055823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}