MILG: Realistic lip-sync video generation with audio-modulated image inpainting

Han Bao, Xuhong Zhang, Qinying Wang, Kangming Liang, Zonghui Wang, Shouling Ji, Wenzhi Chen

Visual Informatics, Volume 8, Issue 3 (September 2024), Pages 71-81. DOI: 10.1016/j.visinf.2024.08.002
https://www.sciencedirect.com/science/article/pii/S2468502X24000408
Existing lip-synchronization (lip-sync) methods generate accurately synchronized mouths and faces in a generated video. However, they still suffer from artifacts in regions of non-interest (RONI), e.g., the background and other parts of the face, which degrade overall visual quality. To solve this problem, we introduce diverse image inpainting to lip-sync generation. We propose Modulated Inpainting Lip-sync GAN (MILG), an audio-constrained inpainting network that predicts synchronized mouths. Rather than generating whole images, MILG uses prior knowledge of the RONI together with the audio sequence to predict lip shape, which keeps the RONI consistent. Specifically, we integrate modulated spatially probabilistic diversity normalization (MSPD Norm) into our inpainting network, which helps it generate fine-grained, diverse mouth movements guided by continuous audio features. Furthermore, to lower the training overhead, we modify the lip-sync contrastive loss to support small-batch and few-sample training. Extensive experiments demonstrate that our approach outperforms the existing state of the art in image quality and authenticity while preserving lip-sync.
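The core idea, predicting only the mouth region while copying the RONI straight from the input frame, can be illustrated as a mask-and-composite step. The sketch below is illustrative only: the generator signature, mask convention, and tensor shapes are assumptions, not the paper's actual interface.

```python
import torch

def inpaint_lip_frame(generator, frame, mouth_mask, audio_feat):
    """Illustrative inference step for audio-constrained lip inpainting.

    frame:      (B, 3, H, W) input video frame
    mouth_mask: (B, 1, H, W) binary mask, 1 inside the mouth region
    audio_feat: (B, D) per-frame audio embedding
    """
    masked = frame * (1.0 - mouth_mask)               # erase the mouth region
    pred = generator(masked, mouth_mask, audio_feat)  # hypothetical signature
    # Composite: generated pixels are used only inside the mask, so the RONI
    # (background, upper face) stays pixel-identical to the input.
    return pred * mouth_mask + frame * (1.0 - mouth_mask)
```

Because RONI pixels are copied rather than regenerated, background flicker and off-mouth artifacts are avoided by construction rather than by extra losses.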
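MSPD Norm itself is described in the abstract only by its role: spatially adaptive normalization, modulated by continuous audio features, that allows probabilistic diversity in the generated mouth. A minimal sketch in that spirit, assuming a SPADE/SPD-Norm-style layer with broadcast audio modulation and learned noise injection (all layer names, shapes, and the noise mechanism are hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioModulatedSPDNorm(nn.Module):
    """Normalizes a decoder feature map, injects spatially weighted noise for
    diversity, then re-modulates it with scale/shift maps predicted from the
    masked prior image and a per-frame audio embedding."""

    def __init__(self, channels: int, prior_channels: int, audio_dim: int,
                 hidden: int = 128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        # Fuse the spatial prior with the (broadcast) audio feature.
        self.fuse = nn.Conv2d(prior_channels + audio_dim, hidden, 3, padding=1)
        self.to_gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.to_beta = nn.Conv2d(hidden, channels, 3, padding=1)
        # Probability map deciding where diversity noise is injected.
        self.to_prob = nn.Conv2d(hidden, 1, 3, padding=1)

    def forward(self, x, prior, audio):
        # x: (B, C, H, W); prior: (B, Cp, Hp, Wp); audio: (B, audio_dim)
        b, _, h, w = x.shape
        prior = F.interpolate(prior, size=(h, w), mode="nearest")
        audio_map = audio[:, :, None, None].expand(-1, -1, h, w)
        ctx = torch.relu(self.fuse(torch.cat([prior, audio_map], dim=1)))
        gamma, beta = self.to_gamma(ctx), self.to_beta(ctx)
        prob = torch.sigmoid(self.to_prob(ctx))   # (B, 1, H, W)
        noise = torch.randn_like(x) * prob        # diversity where prob is high
        return (self.norm(x) + noise) * (1.0 + gamma) + beta
```

The design intent this sketch tries to capture: the audio stream, not random noise alone, steers the per-pixel modulation, so different utterances map to different fine-grained mouth shapes.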
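The modified contrastive loss is likewise only named, not specified, in the abstract. One common way to make a SyncNet-style contrastive loss usable at small batch sizes is to draw negatives from temporally mis-aligned audio/video pairs within the same clip rather than from other batch items; the function below sketches that idea (an assumption, not the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def small_batch_sync_loss(video_emb: torch.Tensor,
                          audio_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style sync loss whose negatives are mis-aligned frames of the
    *same* clip, so it stays informative even at batch size 1.

    video_emb, audio_emb: (T, D) per-frame embeddings of one clip.
    """
    v = F.normalize(video_emb, dim=-1)
    a = F.normalize(audio_emb, dim=-1)
    logits = v @ a.t() / temperature  # (T, T) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    # Diagonal entries are the time-aligned (positive) pairs; every other
    # frame of the clip serves as a negative. Symmetrize over both views.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Under this formulation the number of negatives scales with clip length rather than batch size, which is one plausible route to the small-batch, few-sample training regime the abstract claims.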