{"title":"Few-shot Adversarial Audio Driving Talking Face Generation","authors":"Ruyi Chen, Shengwu Xiong","doi":"10.1145/3503047.3503054","DOIUrl":null,"url":null,"abstract":"Talking-face generation is an interesting and challenging problem in computer vision and has become a research focus. This project aims to generate real talking-face video sequences, especially lip synchronization and head motion. In order to create a personalized talking-face model, these works require training on large-scale audio-visual datasets. However, in many practical scenarios, the personalized appearance features, and audio-video synchronization relationships need to be learned from a few lip synchronization sequences. In this paper, we consider it as a few-shot image synchronization problem: synthesizing talking-face with audio if there are additionally a few lip-synchronized video sequences as the learning task? We apply the reptile methods to train the meta adversarial networks and this meta-model could be adapted on just a few references sequences and done quickly to learn the personalized references models. With meta-learning on the dataset, the model can learn the initialization parameters. And with few adapt steps on the reference sequences, the model can learn quickly and generate highly realistic images with more facial texture and lip-sync. Experiments on several datasets demonstrated significantly better results obtained by our methods than the state-of-the-art methods in both quantitative and quantitative comparisons.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 3rd International Conference on Advanced Information Science and System","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3503047.3503054","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Talking-face generation is an interesting and challenging problem in computer vision and has become a research focus. This work aims to generate realistic talking-face video sequences, with particular attention to lip synchronization and head motion. To build a personalized talking-face model, prior works require training on large-scale audio-visual datasets. In many practical scenarios, however, the personalized appearance features and the audio-video synchronization relationship must be learned from only a few lip-synced sequences. In this paper, we treat this as a few-shot synthesis problem: how can a talking face be synthesized from audio when only a few lip-synchronized video sequences are available as the learning task? We apply the Reptile meta-learning algorithm to train meta adversarial networks; the resulting meta-model can be adapted quickly on just a few reference sequences to obtain a personalized model. Meta-learning over the dataset lets the model learn good initialization parameters, and with a few adaptation steps on the reference sequences it learns quickly and generates highly realistic images with richer facial texture and better lip-sync. Experiments on several datasets show that our method significantly outperforms state-of-the-art methods in both quantitative and qualitative comparisons.
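The abstract names Reptile but gives no further detail, so the following is only a minimal sketch of the standard Reptile outer loop (Nichol et al., 2018) applied to a per-speaker adaptation task. The generator `model`, the `(audio, frames)` task loader, the MSE placeholder loss, and all hyperparameter values are assumptions for illustration; the paper's actual adversarial and lip-sync losses are not specified here.

```python
import copy
import torch
import torch.nn.functional as F

def reptile_meta_step(model, task_loader, inner_lr=1e-4, meta_lr=0.1, inner_steps=5):
    """One Reptile outer-loop step: adapt a clone of the meta-model on a
    single speaker's lip-synced (audio, frame) pairs, then move the
    meta-weights toward the adapted weights."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for step, (audio, frames) in enumerate(task_loader):
        if step >= inner_steps:
            break
        # Placeholder reconstruction loss; the paper's adversarial and
        # lip-sync objectives are not given in the abstract.
        loss = F.mse_loss(adapted(audio), frames)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Reptile update: theta <- theta + meta_lr * (theta_adapted - theta)
    with torch.no_grad():
        for p, q in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (q - p))
    return model
```

At test time the same inner loop, run for a few steps on a new speaker's reference sequences, plays the role of the fast personalization the abstract describes.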