Nonlinear cloth simulation with isogeometric analysis
Jingwen Ren, Hongwei Lin
Computer Animation and Virtual Worlds, vol. 35, no. 1. DOI: 10.1002/cav.2204. Published 2023-08-23.

Physically based cloth simulation with nonlinear behaviors is studied in this article by employing isogeometric analysis (IGA) for surface deformation in 3D space. State-of-the-art simulation techniques, which primarily rely on a triangular mesh to calculate physical points on the cloth directly, require a large number of degrees of freedom. An effective method for cloth deformation is proposed that employs high-order continuous B-spline surfaces determined by control points, yielding fewer degrees of freedom and superior smoothness. The deformation gradient on the high-order IGA element is then represented by the gradient of the B-spline function. To improve efficiency, an iterative method for solving the nonlinear optimization derived from the implicit integration, together with a direct implicit–explicit method, is formulated on the basis of the elastic force calculation. The knots of the representation are further exploited in collision detection and response to reduce the computational burden. Experiments on nonlinear cloth simulation demonstrate the superiority of the proposed method, achieving accurate, efficient, and stable deformation.
Music conditioned 2D hand gesture dance generation with HGS
Dian Zhou, Shiguang Liu, Qing Xu
Computer Animation and Virtual Worlds, vol. 35, no. 1. DOI: 10.1002/cav.2211. Published 2023-08-23.

In recent years, the short video industry has been booming, yet generating actions for virtual characters remains difficult. On short video social platforms, the "hand gesture dance" is a very popular video format, but its growth is limited by the professional skill that choreography demands. To address these problems, we propose an intelligent choreography framework that generates new gesture sequences for unseen audio based on paired data in a database. Our framework adopts a multimodal method and obtains excellent results. In addition, we collected and produced the first and largest paired, labeled hand gesture dance dataset. Various experiments show that our framework not only generates smooth and rich action sequences but also captures semantic information contained in the audio.
3D facial attractiveness prediction based on deep feature fusion
Yu Liu, Enquan Huang, Ziyu Zhou, Kexuan Wang, Shu Liu
Computer Animation and Virtual Worlds, vol. 35, no. 1. DOI: 10.1002/cav.2203. Published 2023-08-22.

Facial attractiveness prediction is an important research topic in the computer vision community. It not only contributes to interdisciplinary research in psychology and sociology, but also provides fundamental technical support for applications like aesthetic medicine and social media. With the advances in 3D data acquisition and feature representation, this paper investigates facial attractiveness from deep learning and three-dimensional perspectives. The 3D faces are first processed to unwrap the texture images and refine the raw meshes. Feature extraction networks for texture, point cloud, and mesh are then carefully designed around the characteristics of each data type. A more discriminative face representation is derived by feature fusion for the final attractiveness prediction. During network training, a cyclical learning rate with an improved range test is introduced to alleviate the difficulty of hyperparameter setting. Extensive experiments are conducted on a 3D facial attractiveness prediction (FAP) benchmark, where the results demonstrate that deep feature fusion and the enhanced learning rate schedule cooperatively improve performance. Specifically, the fusion of texture image and point cloud achieves the best overall prediction, with PC, MAE, and RMSE of 0.7908, 0.4153, and 0.5231, respectively.
The Impacts of Online Experience on Health and Well-Being: The Overlooked Aesthetic Dimension
T. Gorichanaz, A. Lavdas, Michael W. Mehaffy, N. Salingaros
Virtual Worlds, vol. 2, no. 3. DOI: 10.3390/virtualworlds2030015. Published 2023-08-22.

It is well-recognized that online experience can carry profound impacts on health and well-being, particularly for young people. Research has already documented influences from cyberbullying, heightened feelings of inadequacy, and the relative decline of face-to-face interactions and active lifestyles. Less attention has been given to the health impacts of aesthetic experiences of online users, particularly gamers and other users of immersive virtual reality (VR) technologies. However, a significant body of research has begun to document the surprisingly strong yet previously unrecognized impacts of aesthetic experiences on health and well-being in other arenas of life. Other researchers have used both fixed laboratory and wearable sensors and, to a lesser extent, user surveys to measure indicators of activation level, mood, and stress level, which detect physiological markers for health. In this study, we assessed the evidence that online sensorial experience is no less important than experience in the physical world, with the capacity for both harmful effects and salutogenic benefits. We explore the implications for online design and propose an outline for further research.
How to set safety boundary in virtual reality: A dynamic approach based on user motion prediction
Haoxiang Wang, Xiaoping Che, Enyao Chang, Chenxin Qu, Yao Luo, Zhenlin Wei
Computer Animation and Virtual Worlds, vol. 35, no. 1. DOI: 10.1002/cav.2210. Published 2023-08-22.

Virtual reality (VR) interaction safety is a prerequisite for all user activities in the virtual environment. While seeking a deep sense of immersion with little concern about surrounding obstacles, users may have limited ability to perceive the real-world space, resulting in possible collisions with real-world objects. Recent systems and rendering techniques such as the Chaperone can provide safety boundaries to users, but they confine users to a small static space and lack immediacy. To solve this problem, we propose a dynamic approach based on user motion prediction named SCARF, which uses Spearman's correlation analysis, rule learning, and few-shot learning to predict user movements in specific VR tasks. Specifically, we study the relationship between user characteristics, human motion, and categories of VR tasks, and provide an approach that uses biomechanical analysis to dynamically define the interaction space in VR. We report on a user study with 58 volunteers and establish a three-dimensional kinematic dataset from a VR game. The experiments validate that our few-shot learning model is effective and improves the performance of motion prediction. Finally, we implement SCARF in a VR environment for dynamic safety boundary adjustment.