Lingyu Xiong, Xize Cheng, Jintao Tan, Xianjia Wu, Xiandong Li, Lei Zhu, Fei Ma, Minglei Li, Huang Xu, Zhihu Hu
{"title":"SegTalker:基于分割的会说话人脸生成与遮罩引导的局部编辑","authors":"Lingyu Xiong, Xize Cheng, Jintao Tan, Xianjia Wu, Xiandong Li, Lei Zhu, Fei Ma, Minglei Li, Huang Xu, Zhihu Hu","doi":"arxiv-2409.03605","DOIUrl":null,"url":null,"abstract":"Audio-driven talking face generation aims to synthesize video with lip\nmovements synchronized to input audio. However, current generative techniques\nface challenges in preserving intricate regional textures (skin, teeth). To\naddress the aforementioned challenges, we propose a novel framework called\nSegTalker to decouple lip movements and image textures by introducing\nsegmentation as intermediate representation. Specifically, given the mask of\nimage employed by a parsing network, we first leverage the speech to drive the\nmask and generate talking segmentation. Then we disentangle semantic regions of\nimage into style codes using a mask-guided encoder. Ultimately, we inject the\npreviously generated talking segmentation and style codes into a mask-guided\nStyleGAN to synthesize video frame. In this way, most of textures are fully\npreserved. Moreover, our approach can inherently achieve background separation\nand facilitate mask-guided facial local editing. In particular, by editing the\nmask and swapping the region textures from a given reference image (e.g. hair,\nlip, eyebrows), our approach enables facial editing seamlessly when generating\ntalking face video. Experiments demonstrate that our proposed approach can\neffectively preserve texture details and generate temporally consistent video\nwhile remaining competitive in lip synchronization. Quantitative and\nqualitative results on the HDTF and MEAD datasets illustrate the superior\nperformance of our method over existing methods.","PeriodicalId":501480,"journal":{"name":"arXiv - CS - Multimedia","volume":"2 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SegTalker: Segmentation-based Talking Face Generation with Mask-guided Local Editing\",\"authors\":\"Lingyu Xiong, Xize Cheng, Jintao Tan, Xianjia Wu, Xiandong Li, Lei Zhu, Fei Ma, Minglei Li, Huang Xu, Zhihu Hu\",\"doi\":\"arxiv-2409.03605\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Audio-driven talking face generation aims to synthesize video with lip\\nmovements synchronized to input audio. However, current generative techniques\\nface challenges in preserving intricate regional textures (skin, teeth). To\\naddress the aforementioned challenges, we propose a novel framework called\\nSegTalker to decouple lip movements and image textures by introducing\\nsegmentation as intermediate representation. Specifically, given the mask of\\nimage employed by a parsing network, we first leverage the speech to drive the\\nmask and generate talking segmentation. Then we disentangle semantic regions of\\nimage into style codes using a mask-guided encoder. Ultimately, we inject the\\npreviously generated talking segmentation and style codes into a mask-guided\\nStyleGAN to synthesize video frame. In this way, most of textures are fully\\npreserved. Moreover, our approach can inherently achieve background separation\\nand facilitate mask-guided facial local editing. In particular, by editing the\\nmask and swapping the region textures from a given reference image (e.g. hair,\\nlip, eyebrows), our approach enables facial editing seamlessly when generating\\ntalking face video. 
Experiments demonstrate that our proposed approach can\\neffectively preserve texture details and generate temporally consistent video\\nwhile remaining competitive in lip synchronization. Quantitative and\\nqualitative results on the HDTF and MEAD datasets illustrate the superior\\nperformance of our method over existing methods.\",\"PeriodicalId\":501480,\"journal\":{\"name\":\"arXiv - CS - Multimedia\",\"volume\":\"2 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.03605\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.03605","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
SegTalker: Segmentation-based Talking Face Generation with Mask-guided Local Editing
Audio-driven talking face generation aims to synthesize video with lip movements synchronized to the input audio. However, current generative techniques struggle to preserve intricate regional textures (skin, teeth). To address this challenge, we propose a novel framework called SegTalker that decouples lip movements from image textures by introducing segmentation as an intermediate representation. Specifically, given the mask of an image produced by a parsing network, we first leverage the speech to drive the mask and generate a talking segmentation. We then disentangle the semantic regions of the image into style codes using a mask-guided encoder. Finally, we inject the generated talking segmentation and style codes into a mask-guided StyleGAN to synthesize the video frames. In this way, most of the textures are fully preserved. Moreover, our approach inherently achieves background separation and facilitates mask-guided local facial editing. In particular, by editing the mask and swapping in region textures from a given reference image (e.g., hair, lips, eyebrows), our approach enables seamless facial editing while generating talking face video. Experiments demonstrate that our proposed approach effectively preserves texture details and generates temporally consistent video while remaining competitive in lip synchronization. Quantitative and qualitative results on the HDTF and MEAD datasets illustrate the superior performance of our method over existing methods.
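The abstract describes a three-stage pipeline: speech drives a face-parsing mask to produce a talking segmentation, a mask-guided encoder pools each semantic region of the image into a style code, and a mask-guided StyleGAN combines the segmentation and style codes into frames, with local editing performed by swapping the style codes of selected regions. The PyTorch sketch below only illustrates that data flow; all module architectures, the number of parsing classes, and feature dimensions are hypothetical placeholders assumed for the example, not the authors' implementation.

```python
# Minimal sketch of a SegTalker-style pipeline as described in the abstract.
# All shapes, layer choices, and the region count are illustrative assumptions.
import torch
import torch.nn as nn

NUM_REGIONS = 12   # assumed number of face-parsing classes (skin, lips, hair, ...)
STYLE_DIM = 512    # assumed per-region style-code dimension
AUDIO_DIM = 80     # assumed mel-spectrogram feature size per frame

class TalkingSegmentation(nn.Module):
    """Speech-driven mask generator: audio features + a reference mask -> mask logits."""
    def __init__(self):
        super().__init__()
        self.audio_enc = nn.GRU(AUDIO_DIM, 256, batch_first=True)
        self.mask_enc = nn.Conv2d(NUM_REGIONS, 256, 3, padding=1)
        self.decoder = nn.Conv2d(512, NUM_REGIONS, 3, padding=1)

    def forward(self, audio_feats, ref_mask):
        # audio_feats: (B, T, AUDIO_DIM); ref_mask: (B, NUM_REGIONS, H, W), one-hot
        _, h = self.audio_enc(audio_feats)
        a = h[-1][:, :, None, None].expand(-1, -1, *ref_mask.shape[-2:])
        m = self.mask_enc(ref_mask)
        return self.decoder(torch.cat([a, m], dim=1))

class MaskGuidedEncoder(nn.Module):
    """Pools image features inside each semantic region into one style code per region."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, STYLE_DIM, 3, padding=1)

    def forward(self, image, mask):
        feats = self.backbone(image)                        # (B, STYLE_DIM, H, W)
        area = mask.sum(dim=(2, 3)).clamp(min=1.0)          # (B, NUM_REGIONS)
        # region-wise average pooling -> (B, NUM_REGIONS, STYLE_DIM)
        return torch.einsum('bchw,bkhw->bkc', feats, mask) / area[..., None]

class MaskGuidedGenerator(nn.Module):
    """Stand-in for the mask-guided StyleGAN: mask + per-region style codes -> RGB frame."""
    def __init__(self):
        super().__init__()
        self.to_rgb = nn.Conv2d(NUM_REGIONS + STYLE_DIM, 3, 3, padding=1)

    def forward(self, mask, style_codes):
        # broadcast each region's code onto its pixels, then decode to an image
        style_map = torch.einsum('bkhw,bkc->bchw', mask, style_codes)
        return torch.tanh(self.to_rgb(torch.cat([mask, style_map], dim=1)))

def edit_by_swapping(style_codes, ref_codes, region_ids):
    """Mask-guided local editing: replace the style codes of selected regions
    (e.g. hair, lips, eyebrows) with those extracted from a reference image."""
    edited = style_codes.clone()
    edited[:, region_ids] = ref_codes[:, region_ids]
    return edited

if __name__ == "__main__":
    B, H, W, T = 1, 64, 64, 16
    image = torch.rand(B, 3, H, W)
    mask = torch.zeros(B, NUM_REGIONS, H, W)
    mask[:, 0] = 1.0                                        # trivial one-hot mask for the demo
    audio = torch.rand(B, T, AUDIO_DIM)

    seg_net, enc, gen = TalkingSegmentation(), MaskGuidedEncoder(), MaskGuidedGenerator()
    talking_mask = seg_net(audio, mask).softmax(dim=1)      # speech-driven segmentation
    codes = enc(image, mask)                                # per-region style codes
    frame = gen(talking_mask, codes)                        # synthesized frame, (B, 3, H, W)
    print(frame.shape)
```

In this sketch, `edit_by_swapping` mirrors the editing behavior claimed in the abstract: because textures are carried by per-region style codes, replacing the codes of a few regions with those from a reference image changes only those regions while the talking segmentation continues to drive lip motion.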