In this paper, we propose a user interface that enables users to intuitively retrieve relevant motions from a database and edit them by drawing motion trajectories on the screen. The system consists of two-stage operations that provide global-level and local-level motion editing: a global stage in which users roughly design the character's overall body movement through virtual space, and a local stage in which users design detailed movements, such as limb motions. We verified the proposed system on character animation editing tasks involving both the global and local stages.
{"title":"Two-Stage Motion Editing Interface for Character Animation","authors":"Yichen Peng, Chunqi Zhao, Zhengyu Huang, Tsukasa Fukusato, Haoran Xie, K. Miyata","doi":"10.1145/3475946.3480960","DOIUrl":"https://doi.org/10.1145/3475946.3480960","url":null,"abstract":"In this paper, we propose a user interface that enables users to intuitively retrieve relevant motions from a database and edit them by drawing motion trajectories on the screen. This system consists of two-stage operations to provide global-level and local-level motion editing: a global stage that enables users to design the body movement in virtual space roughly, and a local stage that enables users to design detailed movements such as limbs movement. We verified the proposed system with character animation editing with both global and local stages.","PeriodicalId":300353,"journal":{"name":"The ACM SIGGRAPH / Eurographics Symposium on Computer Animation","volume":"20 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123568947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xiaojun Zeng, S. Dwarakanath, Wuyue Lu, Masaki Nakada, Demetri Terzopoulos
The transfer of facial expressions from people to 3D face models is a classic computer graphics problem. In this paper, we present a novel, learning-based approach to transferring facial expressions and head movements from images and videos to a biomechanical model of the face-head-neck musculoskeletal complex. Specifically, leveraging the Facial Action Coding System (FACS) as an intermediate representation of the expression space, we train a deep neural network to take in FACS Action Units (AUs) and output suitable facial muscle and jaw activations for the biomechanical model. Through biomechanical simulation, the activations deform the face, thereby transferring the expression to the model. The success of our approach is demonstrated through experiments involving the transfer of a range of expressive facial images and videos onto our biomechanical face-head-neck complex.
{"title":"Facial Expression Transfer from Video Via Deep Learning","authors":"Xiaojun Zeng, S. Dwarakanath, Wuyue Lu, Masaki Nakada, Demetri Terzopoulos","doi":"10.1145/3475946.3480959","DOIUrl":"https://doi.org/10.1145/3475946.3480959","url":null,"abstract":"The transfer of facial expressions from people to 3D face models is a classic computer graphics problem. In this paper, we present a novel, learning-based approach to transferring facial expressions and head movements from images and videos to a biomechanical model of the face-head-neck musculoskeletal complex. Specifically, leveraging the Facial Action Coding System (FACS) as an intermediate representation of the expression space, we train a deep neural network to take in FACS Action Units (AUs) and output suitable facial muscle and jaw activations for the biomechanical model. Through biomechanical simulation, the activations deform the face, thereby transferring the expression to the model. The success of our approach is demonstrated through experiments involving the transfer of a range of expressive facial images and videos onto our biomechanical face-head-neck complex.","PeriodicalId":300353,"journal":{"name":"The ACM SIGGRAPH / Eurographics Symposium on Computer Animation","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132576958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yifeng Jiang, Michelle Guo, Jiangshan Li, Ioannis Exarchos, Jiajun Wu, C. Liu
Creating virtual humans with embodied, human-like perceptual and actuation constraints promises to provide an integrated simulation platform for many scientific and engineering applications. We present Dynamic and Autonomous Simulated Human (DASH), an embodied virtual human that, given natural language commands, performs grasp-and-stack tasks in a physically simulated, cluttered environment solely using its own visual perception, proprioception, and touch, without requiring human motion data. By factoring the DASH system into a vision module, a language module, and manipulation modules of two skill categories, we can mix and match analytical and machine learning techniques for different modules, so that DASH not only performs randomly arranged tasks with a high success rate, but also does so under anthropomorphic constraints and with fluid and diverse motions. The modular design also favors analysis and extensibility toward more complex manipulation skills.
{"title":"DASH: Modularized Human Manipulation Simulation with Vision and Language for Embodied AI","authors":"Yifeng Jiang, Michelle Guo, Jiangshan Li, Ioannis Exarchos, Jiajun Wu, C. Liu","doi":"10.1145/3475946.3480950","DOIUrl":"https://doi.org/10.1145/3475946.3480950","url":null,"abstract":"Creating virtual humans with embodied, human-like perceptual and actuation constraints has the promise to provide an integrated simulation platform for many scientific and engineering applications. We present Dynamic and Autonomous Simulated Human (DASH), an embodied virtual human that, given natural language commands, performs grasp-and-stack tasks in a physically-simulated cluttered environment solely using its own visual perception, proprioception, and touch, without requiring human motion data. By factoring the DASH system into a vision module, a language module, and manipulation modules of two skill categories, we can mix and match analytical and machine learning techniques for different modules so that DASH is able to not only perform randomly arranged tasks with a high success rate, but also do so under anthropomorphic constraints and with fluid and diverse motions. The modular design also favors analysis and extensibility to more complex manipulation skills.","PeriodicalId":300353,"journal":{"name":"The ACM SIGGRAPH / Eurographics Symposium on Computer Animation","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129242689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}