Markus Flatken; Simon Schneegans; Riccardo Fellegara; Andreas Gerndt. "Immersive and Interactive 3D Visualization of Large-Scale Geoscientific Data." Presence, vol. 33, pp. 57–76, July 2024. doi:10.1162/pres_a_00417.
Nicolò Dozio; Ludovico Rozza; Marek S. Lukasiewicz; Alessandro Colombo; Francesco Ferrise. "Localization and Prediction of Visual Targets' Position in Immersive Virtual Reality." Presence, vol. 31, pp. 5–21, December 2022. doi:10.1162/pres_a_00373.
Modern driver-assist and monitoring systems are severely limited by the lack of a precise understanding of how humans localize and predict the position of neighboring road users. Virtual Reality (VR) is a cost-efficient means to investigate these matters. However, human perception works differently in reality and in immersive virtual environments, with visible differences even between different VR environments. Therefore, when exploring human perception, the relevant perceptual parameters should first be characterized in the specific VR environment. In this paper, we report the results of two experiments designed to assess the localization and prediction accuracy of static and moving visual targets in a VR setup built from broadly available hardware and software. Results of the first experiment provide a reference measure of the significant effect that distance and eccentricity have on localization error for static visual targets, while the second experiment shows the effect of time variables and contextual information on the localization accuracy of moving targets. These results provide a solid basis for testing in VR the effects of different ergonomics and driver-vehicle interaction designs on perception accuracy.
Filipe A. Fernandes; Cláudia M. L. Werner. "A Scoping Review of the Metaverse for Software Engineering Education: Overview, Challenges, and Opportunities." Presence, vol. 31, pp. 107–146, December 2022. doi:10.1162/pres_a_00371.
In the Software Engineering Education (SEE) context, virtual worlds have been used to improve learning outcomes. However, the literature lacks a characterization of the use of the Metaverse for SEE. The objective of this work is to characterize the state of the art of virtual worlds in SEE and to identify research opportunities and challenges that address the limitations found. We conducted a systematic literature review guided by eight research questions, along with data extraction. We report on 17 primary studies that deal mostly with immersive experiences in SEE. The results show several limitations: few Software Engineering (SE) topics are covered; most applications simulate environments and do not explore new ways of viewing and interacting; there is no interoperability between virtual worlds; learning-analytics techniques are not applied; and biometric data are not considered in the validations of the studies. Although virtual worlds for SEE exist, the results indicate the need for mechanisms that support integration between virtual worlds. Therefore, based on the findings of the review, we established a set of components, grouped into five layers, to enable the Metaverse for SEE through fundamental requirements. We hope that this work can motivate promising research that fosters immersive learning experiences in SE through the Metaverse.
Ioannis Xenakis; Damianos Gavalas; Vlasios Kasapakis; Elena Dzardanova; Spyros Vosinakis. "Nonverbal Communication in Immersive Virtual Reality through the Lens of Presence: A Critical Review." Presence, vol. 31, pp. 147–187, December 2022. doi:10.1162/pres_a_00387.
The emergence of the metaverse signifies the transformation of virtual reality (VR) from an isolated digital experience into a social medium that facilitates new contexts of information exchange and communication. In fact, VR comprises the first computer-mediated communication paradigm that enables the transfer of a broad range of nonverbal cues, including some unique cues not known even from face-to-face social encounters. This highlights the urgency of theoretically and experimentally investigating aspects of nonverbal communication (NVC) in immersive virtual environments (IVEs). We provide a critical outlook on empirical studies, aiming to widen the discussion of how presence, as a core social factor, is affected by the perception of nonverbal signals and how NVC may be effectively utilized to facilitate social interactions in immersive environments. Our review proposes a classification of the most fundamental cues and modalities of NVC, which we associate with the conceptualizations of presence most relevant to interpersonal communication. We also investigate the NVC-related aspects essential to constructing an "active" virtual self-concept and highlight associations among these aspects, which form a complex web of research topics in the field of IVEs. We establish that the key research challenge is to go beyond studying nonverbal cues and technological settings in isolation.
Meei-Ling Liaw. "Virtual Reality for Telecollaboration Among Teachers of an Additional Language: Insights from the Multimodal (Inter)action Analysis." Presence, vol. 31, pp. 69–87, December 2022. doi:10.1162/pres_a_00375.
As digital communication technologies advance, ever more sophisticated ICT tools are being used for telecollaboration, including virtual reality (VR). Researchers have applied different models and approaches of multimodal analysis to understand the specific effects of VR on students' language learning (Dubovi, 2022; Friend & Mills, 2021) and intercultural communication (Rustam et al., 2020). Nevertheless, very little has been done to examine language teacher telecollaboration via VR technologies. The present study recruited student teachers of an additional language (LX) (Dewaele, 2017) from different geographical locations and cultural backgrounds to participate in a project aimed at cultivating their critical views on LX teaching and their intercultural communication skills. The participants interacted and discussed LX teaching/learning issues in VR environments; their interactions were video recorded and analyzed. Applying multimodal (inter)action analysis (MIA) (Norris, 2004) as the analytical framework, this study systematically unpacked the thematic saliences and significant moments of the participating LX teachers' intercultural interaction in the three VR meetings. Not only did they take different approaches when hosting the meetings, but they also shifted attention/awareness during the intercultural communication processes. As communication became complex, they were challenged to overcome differences to reach the goal of collaborative LX teacher intercultural learning. Based on the findings and limitations of the present study, suggestions and caveats for the future design and research of intercultural telecollaboration in VR environments are provided.
Georgia Iatraki; Tassos A. Mikropoulos. "Augmented Reality in Physics Education: Students with Intellectual Disabilities Inquire the Structure of Matter." Presence, vol. 31, pp. 89–106, December 2022. doi:10.1162/pres_a_00374.
Immersive technologies support educational activities and provide motivating contexts that are increasingly implemented in special education settings. Augmented Reality (AR) seems to improve the level of engagement in teaching and learning processes for all students, including students with Intellectual Disabilities (ID). However, there is a lack of research that investigates AR learning environments where students with ID can be involved in inquiry-based activities and acquire academic content linked to real situations. The purpose of this study was to implement a single-subject design and evaluate the effects of an AR system on students' performance on the microscopic level of the structure of matter, especially the phase states of water. A functional relationship was found between students' correct responses during probe sessions and the AR inquiry-based intervention. In addition, a social validity assessment indicated that the AR glasses helped students with ID to acquire physics concepts, as well as inquiry skills, in a vivid experience. The students also reported satisfaction with using the AR glasses. Suggestions for future research include the design of AR-based interventions for other science concepts for students with ID as well as other special educational needs.
Spyros Vosinakis; Vlasios Kasapakis; Damianos Gavalas. "Extended Reality (XR) as a Communication Medium: Special Issue Guest Editorial." Presence, vol. 31, pp. 1–4, December 2022. doi:10.1162/pres_e_00388.
The explosive growth of extended reality (XR) technologies during the Covid-19 pandemic (Koumaditis et al., 2021), the recent rise of high-quality virtual and augmented reality platforms that afford collaboration in shared hybrid spaces (Pidel & Ackermann, 2020), and the increased interest of both commercial and research institutions in the design and development of the metaverse (Dwivedi et al., 2022) highlight the potential of XR to serve as a fundamental communication medium in the future (Dzardanova et al., 2022). XR technologies connect users, whether far away or close by, in shared virtual environments. These immersive spaces offer rich multisensory experiences, fostering meaningful communication. XR systems allow the co-presence of individuals in immersive or hybrid spaces through high-fidelity personalized avatars, physical presence, or live video transmission (Nguyen & Bednarz, 2020). Participants in these environments may communicate in real time through spatial voice and, using advanced tracking technologies, may convey a multitude of nonverbal cues such as full-body gestures, gaze, and facial expressions (Kasapakis et al., 2021; Maloney et al., 2020; Baker et al., 2021). XR enhances natural and physical interactions through embodied interfaces, enabling multiple users to collaborate in immersive and visually appealing virtual environments (Lee & Yoo, 2021). These features render XR the most effective alternative to face-to-face communication and collaboration, but they also pose challenges regarding their impact on the medium's efficiency and overall user experience. Some of these challenges include, but are not limited to, usability and user experience in multi-user XR environments, nonverbal communication in immersive virtual reality, novel system setups for co-presence and remote collaboration, the impact of embodiment on user behavior and social presence, and the design and evaluation of multi-user XR systems in specific application areas.
Our interest in shedding more light on these issues inspired the call for this special issue. We aimed to collect and present recent advances related to XR systems and their affordances as communication and collaboration environments. The call sought original studies or reviews that address new challenges and implications and explore the potential of XR to serve as a communication medium, along with the factors that can affect its efficiency and overall user experience. The Presence: Virtual and Augmented Reality special issue on "Extended Reality (XR) as a Communication Medium" received 23 submissions contributed
Hanseul Jun; Husam Shaik; Cyan DeVeaux; Michael Lewek; Henry Fuchs; Jeremy Bailenson. "An Evaluation Study of 2D and 3D Teleconferencing for Remote Physical Therapy." Presence, vol. 31, pp. 47–67, December 2022. doi:10.1162/pres_a_00379.
The present research investigates the effectiveness of using a telepresence system compared to a video conferencing system, and of using two cameras compared to one camera, for remote physical therapy. We used Telegie as our telepresence system, which allowed users to see an environment captured with RGBD cameras in 3D through a VR headset. Since both telepresence and the inclusion of a second camera provide users with additional spatial information, we examined this affordance within the relevant context of remote physical therapy. Our dyadic study across different time zones paired 11 physical therapists with 76 participants who took on the role of patients for a remote session. Our quantitative questionnaire data and qualitative interviews with therapists revealed several important findings. First, after controlling for individual differences among participants, using two cameras had a marginally significant positive effect on physical therapy assessment scores from therapists. Second, the spatial ability of patients was a strong predictor of therapist assessment. And third, the video clarity of remote communication systems mattered. Based on our findings, we offer several suggestions and insights towards the future use of telepresence systems for remote communication.
Kazuhiro Esaki; Katashi Nagao. "VR Dance Training System Capable of Human Motion Tracking and Automatic Dance Evaluation." Presence, vol. 31, pp. 23–45, December 2022. doi:10.1162/pres_a_00383.
In this paper, a method for 3D human body tracking using multiple cameras and an automatic evaluation method using machine learning are developed to construct a virtual reality (VR) dance self-training system for fast-moving hip-hop dance. Dancers' movement data are input as time-series data of temporal changes in joint point positions and rotations and are categorized into instructional items that are frequently pointed out by coaches as areas for improvement in actual dance lessons. For automatic dance evaluation, contrastive learning is used to obtain better expression vectors with less data. As a result, the accuracy when using contrastive learning was 0.79, a significant improvement over 0.65 without contrastive learning. In addition, since each dance is modeled by a coach, the accuracy was slightly improved to 0.84 by using, as input, the difference between the expression vectors of the model's and the user's movement data. Eight subjects used the VR dance training system, and the results of a questionnaire survey confirmed that the system is effective.
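The abstract above does not give the paper's exact loss formulation, so as a rough illustration of how contrastive learning produces "expression vectors" from paired sequence embeddings, here is a minimal numpy sketch of a standard NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss. The function name, the batch layout, and the use of numpy are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """NT-Xent contrastive loss over two batches of embeddings.

    z_a, z_b: (N, D) arrays of embeddings for two views of the same N
    motion sequences; row i of z_a and row i of z_b form a positive pair.
    All other rows in the combined batch act as negatives.
    (Hypothetical sketch -- not the paper's actual formulation.)
    """
    # L2-normalize so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    z = np.concatenate([z_a, z_b], axis=0)        # (2N, D) combined batch
    sim = z @ z.T / temperature                   # (2N, 2N) similarity logits
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    n = z_a.shape[0]
    # The positive for row i is row i+n, and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy of each row's positive against all candidates.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Pulling positive pairs together and pushing negatives apart in this way is what lets a small labeled set suffice downstream: the embeddings already separate movement patterns, so a simple classifier over them can map sequences to instructional items.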