Empirically Evaluating the Effects of Eye Height and Self-Avatars on Dynamic Passability Affordances in Virtual Reality
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00046
Ayush Bhargava, Roshan Venkatakrishnan, R. Venkatakrishnan, Hannah M. Solini, Kathryn M. Lucaites, Andrew C. Robb, C. Pagano, Sabarish V. Babu
Over the past two decades, self-avatars have been shown to affect the perception both of oneself and of environmental properties, including the sizes and distances of elements in immersive virtual environments. However, virtual avatars that accurately match the body proportions of their users remain inaccessible to the general public. As such, most virtual experiences that represent the user employ a generic avatar that does not fit the proportions of the user's body. This can negatively affect judgments involving affordances, such as passability and maneuverability, which describe the properties of environmental elements relative to the properties of the user and thereby provide information about the actions that can be enacted. This is especially true when the task requires the user to maneuver around moving objects, as in games. It is therefore necessary to understand how differently sized self-avatars affect the perception of affordances in dynamic virtual environments. To this end, we conducted an experiment investigating how a self-avatar that is either the same size as, 20% shorter than, or 20% taller than the user's own body affects passability judgments in a dynamic virtual environment. Our results suggest that the presence of a self-avatar leads to more regulated and safer road-crossing behavior, and helps participants synchronize self-motion to external stimuli more quickly than in the absence of a self-avatar.
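As a minimal illustration of the size manipulation described in this abstract (not code from the paper), the sketch below derives avatar eye heights for the three conditions from a user's measured eye height; the function and variable names, and the example value, are assumptions for illustration only.

```python
# Hypothetical sketch: scaling a self-avatar relative to the user's measured eye height.
# The 0.8 / 1.0 / 1.2 factors correspond to the 20% shorter, matched, and 20% taller
# conditions mentioned in the abstract; everything else is illustrative.

CONDITIONS = {"shorter": 0.8, "matched": 1.0, "taller": 1.2}

def avatar_eye_heights(user_eye_height_m: float) -> dict:
    """Return the avatar eye height (in meters) for each experimental condition."""
    return {name: factor * user_eye_height_m for name, factor in CONDITIONS.items()}

if __name__ == "__main__":
    # Example: a user with a tracked eye height of 1.65 m.
    print(avatar_eye_heights(1.65))  # {'shorter': 1.32, 'matched': 1.65, 'taller': 1.98}
```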
{"title":"Empirically Evaluating the Effects of Eye Height and Self-Avatars on Dynamic Passability Affordances in Virtual Reality","authors":"Ayush Bhargava, Roshan Venkatakrishnan, R. Venkatakrishnan, Hannah M. Solini, Kathryn M. Lucaites, Andrew C. Robb, C. Pagano, Sabarish V. Babu","doi":"10.1109/VR55154.2023.00046","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00046","url":null,"abstract":"Over the past two decades self-avatars have been shown to affect the perception of both oneself and of environmental properties including the sizes and distances of elements in immersive virtual environments. However, virtual avatars that accurately match the body proportions of their users remain inaccessible to the general public. As such, most virtual experiences that represent the user have a generic avatar that does not fit the proportions of the users' body. This can negatively affect judgments involving affordances, such as passability and maneuverability, which pertain to the relationship between the properties of environmental elements relative to the properties of the user providing information about actions that can be enacted. This is especially true when the task requires the user to maneuver around moving objects like in games. Therefore, it is necessary to understand how different sized self-avatars affect the perception of affordances in dynamic virtual environments. To better understand this, we conducted an experiment investigating how a self-avatar that is either the same size, 20% shorter, or 20% taller, than the user's own body affects passability judgments in a dynamic virtual environment. Our results suggest that the presence of self-avatars results in better regulatory and safer road crossing behavior, and helps participants synchronize self-motion to external stimuli quicker than in the absence of self-avatars.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131374971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Half Title Page
Pub Date: 2023-03-01 | DOI: 10.1109/vr55154.2023.00001
{"title":"Half Title Page","authors":"","doi":"10.1109/vr55154.2023.00001","DOIUrl":"https://doi.org/10.1109/vr55154.2023.00001","url":null,"abstract":"","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133275218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Design and Development of a Mixed Reality Acupuncture Training System
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00042
Qilei Sun, Jiayou Huang, Haodong Zhang, Paul Craig, Lingyun Yu, Eng Gee Lim
This paper looks at how mixed reality can be used to improve and enhance Chinese acupuncture practice through the introduction of an acupuncture training simulator. A prototype system developed for our study allows practitioners to insert virtual needles with their bare hands into a full-scale 3D representation of the human body with labelled acupuncture points. This provides them with a safe and natural environment in which to develop their acupuncture skills while simulating the actual physical process of acupuncture. It also helps them develop their muscle memory for acupuncture and strengthens their memory of acupuncture points through a more immersive learning experience. We describe some of the design decisions and technical challenges overcome in the development of our system. We also present the results of a comparative user evaluation with potential users aimed at assessing the viability of such a mixed reality system being used as part of their training and development. The results of our evaluation reveal that the training system outperformed the comparison condition in enhancing spatial understanding as well as in improving learning and dexterity in acupuncture practice. These results go some way towards demonstrating the potential of mixed reality for improving practice in therapeutic medicine.
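To make the interaction concrete, here is a minimal, hypothetical sketch (not from the paper) of one way such a trainer might check whether a virtual needle tip has been placed close enough to a labelled acupuncture point; the point names, positions, tolerance, and function names are all assumptions.

```python
# Hypothetical sketch: checking needle placement against labelled acupuncture points.
import numpy as np

# Illustrative acupoint positions in the body model's coordinate frame (meters).
ACUPOINTS = {
    "LI4": np.array([0.02, 0.95, 0.10]),
    "ST36": np.array([0.05, 0.45, 0.08]),
}

def nearest_acupoint(needle_tip: np.ndarray, tolerance_m: float = 0.01):
    """Return (name, distance) of the closest acupoint if within tolerance, else None."""
    name, pos = min(ACUPOINTS.items(), key=lambda kv: np.linalg.norm(needle_tip - kv[1]))
    dist = float(np.linalg.norm(needle_tip - pos))
    return (name, dist) if dist <= tolerance_m else None

# Example: a needle tip 5 mm away from LI4 counts as a correct insertion.
print(nearest_acupoint(np.array([0.02, 0.95, 0.105])))
```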
{"title":"Design and Development of a Mixed Reality Acupuncture Training System","authors":"Qilei Sun, Jiayou Huang, Haodong Zhang, Paul Craig, Lingyun Yu, Eng Gee Lim","doi":"10.1109/VR55154.2023.00042","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00042","url":null,"abstract":"This paper looks at how mixed reality can be used for the improvement and enhancement of Chinese acupuncture practice through the introduction of an acupuncture training simulator. A prototype system developed for our study allows practitioners to insert virtual needles using their bare hands into a full-scale 3D representation of the human body with labelled acupuncture points. This provides them with a safe and natural environment to develop their acupuncture skills simulating the actual physical process of acupuncture. It also helps them to develop their muscle memory for acupuncture and better develops their memory of acupuncture points through a more immersive learning experience. We describe some of the design decisions and technical challenges overcome in the development of our system. We also present the results of a comparative user evaluation with potential users aimed at assessing the viability of such a mixed reality system being used as part of their training and development. The results of our evaluation reveal the training system outperformed in the enhancement of spatial understanding as well as improved learning and dexterity in acupuncture practice. These results go some way to demonstrating the potential of mixed reality for improving practice in therapeutic medicine.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130849558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Volumetric Avatar Reconstruction with Spatio-Temporally Offset RGBD Cameras
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00023
Gareth Rendle, A. Kreskowski, Bernd Froehlich
RGBD cameras can capture users and their actions in the real world for the reconstruction of photo-realistic volumetric avatars that allow rich interaction between spatially distributed telepresence parties in virtual environments. In this paper, we present and evaluate a system design that enables volumetric avatar reconstruction at increased frame rates. We demonstrate that we can overcome the limited capture frame rate of commodity RGBD cameras such as the Azure Kinect by dividing a set of cameras into two spatio-temporally offset reconstruction groups and implementing a real-time reconstruction pipeline to fuse the temporally offset RGBD image streams. Comparisons of our proposed system against capture configurations possible with the same number of RGBD cameras indicate that it is beneficial to use a combination of spatially and temporally offset RGBD cameras, allowing increased reconstruction frame rates and scene coverage while producing temporally consistent volumetric avatars.
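The core idea of the two spatio-temporally offset capture groups can be sketched as follows; this is an assumed, simplified scheduler (not the authors' implementation), with the capture function, group assignment, and frame-rate numbers as illustrative placeholders.

```python
# Hypothetical sketch: two camera groups triggered half a frame period apart,
# so their interleaved captures yield roughly twice the per-group frame rate.
import time

CAPTURE_HZ = 30.0                  # assumed per-camera capture rate (e.g., Azure Kinect)
PERIOD = 1.0 / CAPTURE_HZ          # ~33.3 ms per group
OFFSET = PERIOD / 2.0              # group B fires ~16.7 ms after group A

def capture(group_name: str, t: float) -> dict:
    """Placeholder for triggering all cameras in one group and returning RGBD frames."""
    return {"group": group_name, "timestamp": t}

def interleaved_stream(duration_s: float):
    """Yield frames from groups A and B interleaved at ~2x the single-group rate."""
    start = time.time()
    next_a, next_b = start, start + OFFSET
    while time.time() - start < duration_s:
        now = time.time()
        if now >= next_a:
            yield capture("A", now)
            next_a += PERIOD
        if now >= next_b:
            yield capture("B", now)
            next_b += PERIOD
        time.sleep(0.001)  # a fused reconstruction pipeline would consume these frames in real time
```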
{"title":"Volumetric Avatar Reconstruction with Spatio-Temporally Offset RGBD Cameras","authors":"Gareth Rendle, A. Kreskowski, Bernd Froehlich","doi":"10.1109/VR55154.2023.00023","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00023","url":null,"abstract":"RGBD cameras can capture users and their actions in the real world for reconstruction of photo-realistic volumetric avatars that allow rich interaction between spatially distributed telepresence parties in virtual environments. In this paper, we present and evaluate a system design that enables volumetric avatar reconstruction at increased frame rates. We demonstrate that we can overcome the limited capturing frame rate of commodity RGBD cameras such as the Azure Kinect by dividing a set of cameras into two spatio-temporally offset reconstruction groups and implementing a real-time reconstruction pipeline to fuse the temporally offset RGBD image streams. Comparisons of our proposed system against capture configurations possible with the same number of RGBD cameras indicate that it is beneficial to use a combination of spatially and temporally offset RGBD cameras, allowing increased reconstruction frame rates and scene coverage while producing temporally consistent volumetric avatars.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124561094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Measuring the Effect of Stereo Deficiencies on Peripersonal Space Pointing
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00063
Anil Ufuk Batmaz, M. H. Mughrabi, M. Sarac, Mayra Donaji Barrera Machuca, W. Stuerzlinger
State-of-the-art Virtual Reality (VR) and Augmented Reality (AR) headsets rely on single-focal stereo displays. For objects away from the focal plane, such displays create a vergence-accommodation conflict (VAC), potentially degrading user interaction performance. In this paper, we study how the VAC affects pointing at targets within arm's reach with virtual hand and raycasting interaction in current stereo display systems. We use a previously proposed experimental methodology that extends the ISO 9241-411:2015 multi-directional selection task to enable fair comparisons between selecting targets in different display conditions. We conducted a user study with eighteen participants, and the results indicate that participants were faster and had higher throughput in the constant-VAC condition with the virtual hand. We hope that our results enable designers to choose more efficient interaction methods in virtual environments.
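For readers unfamiliar with the throughput measure mentioned above, the following sketch computes effective throughput in the usual ISO 9241-411 / Fitts' law style (effective width from the spread of selection endpoints); this is a generic formulation for illustration, not the authors' analysis code, and the example data are made up.

```python
# Sketch of ISO 9241-411-style effective throughput for a multi-directional pointing task.
import numpy as np

def effective_throughput(distances, endpoint_errors, movement_times):
    """
    distances:       nominal target distances per trial (meters)
    endpoint_errors: signed deviations of selection endpoints along the task axis (meters)
    movement_times:  movement time per trial (seconds)
    """
    distances = np.asarray(distances, dtype=float)
    endpoint_errors = np.asarray(endpoint_errors, dtype=float)
    movement_times = np.asarray(movement_times, dtype=float)

    w_e = 4.133 * np.std(endpoint_errors)            # effective target width
    d_e = distances.mean() + endpoint_errors.mean()  # effective movement distance
    id_e = np.log2(d_e / w_e + 1.0)                  # effective index of difficulty (bits)
    return id_e / movement_times.mean()              # throughput in bits/s

# Example with made-up trial data:
print(effective_throughput([0.3] * 5,
                           [0.01, -0.02, 0.0, 0.015, -0.005],
                           [0.6, 0.7, 0.65, 0.62, 0.68]))
```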
{"title":"Measuring the Effect of Stereo Deficiencies on Peripersonal Space Pointing","authors":"Anil Ufuk Batmaz, M. H. Mughrabi, M. Sarac, Mayra Donaji Barrera Machuca, W. Stuerzlinger","doi":"10.1109/VR55154.2023.00063","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00063","url":null,"abstract":"State-of-the-art Virtual Reality (VR) and Augmented Reality (AR) headsets rely on singlefocal stereo displays. For objects away from the focal plane, such displays create a vergence-accommodation conflict (VAC), potentially degrading user interaction performance. In this paper, we study how the VAC affects pointing at targets within arm's reach with virtual hand and raycasting interaction in current stereo display systems. We use a previously proposed experimental methodology that extends the ISO 9241–411:2015 multi-directional selection task to enable fair comparisons between selecting targets in different display conditions. We conducted a user study with eighteen participants and the results indicate that participants were faster and had higher throughput in the constant VAC condition with the virtual hand. We hope that our results enable designers to choose more efficient interaction methods in virtual environments.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129721468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Redirected Walking Based on Historical User Walking Data
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00021
Cheng-Wei Fan, Sen-Zhe Xu, Peng Yu, Fang-Lue Zhang, Songhai Zhang
With redirected walking (RDW) technology, people can explore large virtual worlds in smaller physical spaces. RDW steers the trajectory of the user's walking in the physical space through subtle adjustments, so as to minimize collisions between the user and the physical environment. Previous predictive algorithms place constraints on the user's path according to the spatial layout of the virtual environment and work well when applicable, while reactive algorithms are more general for scenarios involving free exploration or unconstrained movements. However, even in relatively free environments, the user's walking can be predicted to a certain extent by analyzing the user's historical walking data, which can inform the decision-making of reactive algorithms. This paper proposes a novel RDW method that improves real-time, unrestricted RDW by analyzing and utilizing the user's historical walking data. In this method, the physical space is discretized by considering the user's location and orientation in the physical space. Using the weighted directed graph obtained from the user's historical walking data, we dynamically update the scores of different reachable poses in the physical space during the user's walking. We rank the scores and choose the optimal target position and orientation to guide the user to the best pose. Since simulation experiments have been shown to be effective in many previous RDW studies, we also provide a method to simulate user walking trajectories and generate a dataset. Experiments show that our method outperforms multiple state-of-the-art methods in various environments of different sizes and spatial layouts.
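A minimal sketch of the history-based scoring idea described in the abstract, assuming a discretization into (x, y, orientation) cells and transition counts as the edge weights of the directed graph; the class, parameter names, and scoring rule are placeholders, not the authors' implementation.

```python
# Hypothetical sketch: scoring reachable physical poses using a weighted directed
# graph built from the user's historical walking transitions.
import math
from collections import defaultdict

class HistoryScorer:
    def __init__(self, cell_size=0.5, angle_bins=8):
        self.cell_size = cell_size
        self.angle_bins = angle_bins
        self.transitions = defaultdict(int)  # (pose_from, pose_to) -> observed count

    def discretize(self, x, y, theta):
        """Map a continuous physical pose to a (cell_x, cell_y, angle_bin) graph node."""
        return (int(x // self.cell_size), int(y // self.cell_size),
                int((theta % (2 * math.pi)) / (2 * math.pi) * self.angle_bins))

    def record(self, pose_from, pose_to):
        """Accumulate one observed transition from the historical walking data."""
        self.transitions[(self.discretize(*pose_from), self.discretize(*pose_to))] += 1

    def best_target(self, current_pose, candidate_poses):
        """Rank candidate reachable poses by how often history continued through them."""
        cur = self.discretize(*current_pose)
        return max(candidate_poses,
                   key=lambda p: self.transitions[(cur, self.discretize(*p))])
```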
{"title":"Redirected Walking Based on Historical User Walking Data","authors":"Cheng-Wei Fan, Sen-Zhe Xu, Peng Yu, Fang-Lue Zhang, Songhai Zhang","doi":"10.1109/VR55154.2023.00021","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00021","url":null,"abstract":"With redirected walking (RDW) technology, people can explore large virtual worlds in smaller physical spaces. RDW controls the trajectory of the user's walking in the physical space through subtle adjustments, so as to minimize the collision between the user and the physical space. Previous predictive algorithms place constraints on the user's path according to the spatial layouts of the virtual environment and work well when applicable, while reactive algorithms are more general for scenarios involving free exploration or uncon-strained movements. However, even in relatively free environments, we can predict the user's walking to a certain extent by analyzing the user's historical walking data, which can help the decision-making of reactive algorithms. This paper proposes a novel RDW method that improves the effect of real-time unrestricted RDW by analyzing and utilizing the user's historical walking data. In this method, the physical space is discretized by considering the user's location and orientation in the physical space. Using the weighted directed graph obtained from the user's historical walking data, we dynamically update the scores of different reachable poses in the physical space during the user's walking. We rank the scores and choose the optimal target position and orientation to guide the user to the best pose. Since simulation experiments have been shown to be effective in many previous RDW studies, we also provide a method to simulate user walking trajectories and generate a dataset. Experiments show that our method outperforms multiple state-of-the-art methods in various environments of different sizes and spatial layouts.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129706989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Exploring Neural Biomarkers in Young Adults Resistant to VR Motion Sickness: A Pilot Study of EEG
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00048
Gang Li, Katharina Margareta Theresa Pöhlmann, Mark Mcgill, C. Chen, S. Brewster, F. Pollick
VR (Virtual Reality) Motion Sickness (VRMS) refers to purely visually induced motion sickness. Not everyone is susceptible to VRMS, but when it is experienced, nausea often leads users to withdraw from the ongoing VR application. VRMS therefore represents a serious challenge in the field of VR ergonomics and human factors. In line with prior neuro-ergonomics research, this paper treats VRMS as a brain-state problem, since various etiologies of VRMS support the claim that it is caused by disagreement between the vestibular and visual sensory inputs. What sets this work apart from the existing literature, however, is that it explores anti-VRMS brain patterns via electroencephalogram (EEG) in VRMS-resistant individuals. Based on existing datasets from a previous study, we found enhanced theta activity in the left parietal cortex in VRMS-resistant individuals (N = 10) compared to VRMS-susceptible individuals (N = 10). Even though the sample size is not large, this finding achieved a medium effect size. It offers new hypotheses regarding how to reduce VRMS by enhancing brain function itself (e.g., via non-invasive transcranial electrostimulation techniques) without the need to redesign existing VR content.
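As a rough illustration of the kind of analysis behind a "theta activity" comparison (not the authors' pipeline), the sketch below estimates theta-band power for a single left-parietal EEG channel with Welch's method; the channel data, sampling rate, and band edges are assumptions.

```python
# Hypothetical sketch: theta-band power from one EEG channel via Welch's method.
import numpy as np
from scipy.signal import welch

FS = 250.0           # assumed sampling rate in Hz
THETA = (4.0, 8.0)   # assumed theta band in Hz

def theta_power(eeg_channel: np.ndarray, fs: float = FS) -> float:
    """Integrate the power spectral density of one channel over the theta band."""
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=int(fs * 2))
    band = (freqs >= THETA[0]) & (freqs <= THETA[1])
    df = freqs[1] - freqs[0]
    return float(psd[band].sum() * df)

# Example: compare mean theta power between two groups of synthetic recordings.
resistant = [theta_power(np.random.randn(int(FS) * 60)) for _ in range(10)]
susceptible = [theta_power(np.random.randn(int(FS) * 60)) for _ in range(10)]
print(np.mean(resistant), np.mean(susceptible))
```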
{"title":"Exploring Neural Biomarkers in Young Adults Resistant to VR Motion Sickness: A Pilot Study of EEG","authors":"Gang Li, Katharina Margareta Theresa Pöhlmann, Mark Mcgill, C. Chen, S. Brewster, F. Pollick","doi":"10.1109/VR55154.2023.00048","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00048","url":null,"abstract":"VR (Virtual Reality) Motion Sickness (VRMS) refers to purely visually-induced motion sickness. Not everyone is susceptible to VRMS, but if experienced, nausea will often lead users to withdraw from the ongoing VR applications. VRMS represents a serious challenge in the field of VR ergonomics and human factors. Like other neuro-ergonomics researchers did before, this paper considers VRMS as a brain state problem as various etiologies of VRMS support the claim that VRMS is caused by disagreement between the vestibular and visual sensory inputs. However, what sets this work apart from the existing literature is that it explores anti-VRMS brain patterns via electroencephalogram (EEG) in VRMS-resistant individuals. Based on existing datasets of a previous study, we found enhanced theta activity in the left parietal cortex in VRMS-resistant individuals (N= 10) compared to VRMS-susceptible individuals (N=10). Even though the sample size per se is not large, this finding achieved medium effect size. This finding offers new hypotheses regarding how to reduce VRMS by the enhancement of brain functions per se (e.g., via non-invasive transcranial electrostimulation techniques) without the need to redesign the existing VR content.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130192605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Enhancing the Reading Experience on AR HMDs by Using Smartphones as Assistive Displays
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00053
Sunyoung Bang, Woontack Woo
The reading experience on current augmented reality (AR) head-mounted displays (HMDs) is often impeded by the devices' low perceived resolution, translucency, and small field of view, especially in situations involving lengthy text. Although many researchers have proposed methods to address this issue, these displays' inherent characteristics prevent them from delivering readability on par with that of more traditional displays. As a solution, we explore the use of smartphones as assistive displays for AR HMDs. To validate the feasibility of our approach, we conducted a user study in which we compared a smartphone-assisted hybrid interface against using the HMD only, for two different text lengths. The results demonstrate that the hybrid interface yields a lower task load regardless of text length, although it does not improve task performance. Furthermore, the hybrid interface provides a better experience with regard to user comfort, visual fatigue, and perceived readability. Based on these results, we claim that joining the spatial output capabilities of the HMD with the high-resolution display of the smartphone is a viable solution for improving the reading experience in AR.
{"title":"Enhancing the Reading Experience on AR HMDs by Using Smartphones as Assistive Displays","authors":"Sunyoung Bang, Woontack Woo","doi":"10.1109/VR55154.2023.00053","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00053","url":null,"abstract":"The reading experience on current augmented reality (AR) head mounted displays (HMDs) is often impeded by the devices' low perceived resolution, translucency, and small field of view, especially in situations involving lengthy text. Although many researchers have proposed methods to resolve this issue, the inherent characteristics prevent these displays from delivering a readability on par with that of more traditional displays. As a solution, we explore the use of smartphones as assistive displays to AR HMDs. To validate the feasibility of our approach, we conducted a user study in which we compared a smartphone-assisted hybrid interface against using the HMD only for two different text lengths. The results demonstrate that the hybrid interface yields a lower task load regardless of the text length, although it does not improve task performance. Furthermore, the hybrid interface provides a better experience regarding user comfort, visual fatigue, and perceived readability. Based on these results, we claim that joining the spatial output capabilities of the HMD with the high-resolution display of the smartphone is a viable solution for improving the reading experience in AR.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125627616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Fully Automatic Blendshape Generation for Stylized Characters
Pub Date: 2023-03-01 | DOI: 10.1109/VR55154.2023.00050
Jingying Wang, Yilin Qiu, Keyu Chen, Yu Ding, Ye Pan
Avatars are one of the most important elements in virtual environments. Real-time facial retargeting technology is of vital importance in AR/VR interaction, filmmaking, and the entertainment industry, and blendshapes for avatars are one of its key assets. Previous works either focused on characters with the same topology, which does not generalize to arbitrary avatars, or used optimization methods that place high demands on the dataset. In this paper, we leverage deep learning and feature transfer to realize deformation transfer, thereby generating blendshapes for target avatars based on the given sources. We propose a Variational Autoencoder (VAE) to extract the latent space of the avatars and then use a Multilayer Perceptron (MLP) model to realize the translation between the latent spaces of the source and target avatars. By decoding the latent codes of different blendshapes, we can obtain blendshapes for the target avatars with the same semantics as those of the source. We qualitatively and quantitatively compared our method with both classical and learning-based methods. The results reveal that the blendshapes generated by our method achieve higher similarity to the ground-truth blendshapes than those of state-of-the-art methods. We also demonstrate that our method can be applied to expression transfer for stylized characters with different topologies.
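A minimal PyTorch-style sketch of the pipeline described above (a VAE for latent extraction plus an MLP mapping between source and target latent spaces); the layer sizes, mesh representation (flattened vertex offsets), and inference flow are assumptions for illustration, not the authors' architecture.

```python
# Hypothetical sketch: VAE per avatar + MLP translating source latents to target latents,
# so decoding a translated latent yields a semantically matching target blendshape.
import torch
import torch.nn as nn

class MeshVAE(nn.Module):
    def __init__(self, n_verts: int, latent_dim: int = 64):
        super().__init__()
        d = n_verts * 3  # flattened (x, y, z) vertex offsets of one blendshape
        self.encoder = nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, 2 * latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, d))

    def encode(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

    def decode(self, z):
        return self.decoder(z)

class LatentTranslator(nn.Module):
    """MLP mapping source-avatar latents to target-avatar latents."""
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))

    def forward(self, z_src):
        return self.net(z_src)

# Inference: encode a source blendshape, translate its latent, decode on the target avatar.
src_vae, tgt_vae, translator = MeshVAE(5000), MeshVAE(8000), LatentTranslator()
src_blendshape = torch.randn(1, 5000 * 3)
z_src, _, _ = src_vae.encode(src_blendshape)
target_blendshape = tgt_vae.decode(translator(z_src))
```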
{"title":"Fully Automatic Blendshape Generation for Stylized Characters","authors":"Jingying Wang, Yilin Qiu, Keyu Chen, Yu Ding, Ye Pan","doi":"10.1109/VR55154.2023.00050","DOIUrl":"https://doi.org/10.1109/VR55154.2023.00050","url":null,"abstract":"Avatars are one of the most important elements in virtual environments. Real-time facial retargeting technology is of vital importance in AR/VR interactions, the filmmaking, and the entertainment industry, and blendshapes for avatars are one of its important materials. Previous works either focused on the characters with the same topology, which cannot be generalized to universal avatars, or used optimization methods that have high demand on the dataset. In this paper, we adopt the essence of deep learning and feature transfer to realize deformation transfer, thereby generating blendshapes for target avatars based on the given sources. We proposed a Variational Autoencoder (VAE) to extract the latent space of the avatars and then use a Multilayer Perceptron (MLP) model to realize the translation between the latent spaces of the source avatar and target avatars. By decoding the latent code of different blendshapes, we can obtain the blendshapes for the target avatars with the same semantics as that of the source. We qualitatively and quantitatively compared our method with both classical and learning-based methods. The results revealed that the blendshapes generated by our method achieves higher similarity to the groundtruth blendshapes than the state-of-art methods. We also demonstrated that our method can be applied to expression transfer for stylized characters with different topologies.","PeriodicalId":346767,"journal":{"name":"2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124137195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}