Feature Extraction For Visual Speaker Authentication Against Computer-Generated Video Attacks
Jun Ma, Shilin Wang, Aixin Zhang, Alan Wee-Chung Liew
2020 IEEE International Conference on Image Processing (ICIP), October 2020
DOI: 10.1109/ICIP40778.2020.9190976
Recent research has shown that lip features can achieve reliable authentication performance together with good liveness detection ability. However, with the development of sophisticated face generation methods based on deepfake technology, talking videos can now be forged with high quality, and static lip information is no longer reliable in such cases. To meet this challenge, this paper proposes a new deep neural network structure that extracts lip features robust against both human and Computer-Generated (CG) imposters. Two novel network units, the feature-level Difference block (Diffblock) and the pixel-level Dynamic Response block (DRblock), are proposed to reduce the influence of static lip information and to represent dynamic talking-habit information. Experiments on the GRID dataset demonstrate that the proposed network extracts discriminative and robust lip features and outperforms two state-of-the-art visual speaker authentication approaches in both human-imposter and CG-imposter scenarios.
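The abstract gives no implementation details, but the core intuition behind a feature-level difference block, cancelling static lip appearance by differencing the features of adjacent frames so that only the dynamic talking motion survives, can be illustrated with a minimal sketch. Everything below (the per-frame encoder, layer sizes, and the name FeatureDifferenceBlock) is an assumption made for illustration, not the authors' actual Diffblock or DRblock architecture.

```python
# A minimal sketch (not the authors' code) of feature-level temporal
# differencing: encode each frame, then keep only frame-to-frame feature
# changes, so that static (per-frame) lip appearance cancels out.
import torch
import torch.nn as nn

class FeatureDifferenceBlock(nn.Module):
    """Hypothetical block: per-frame encoding followed by differencing."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Assumed per-frame encoder for grayscale lip-region crops.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 1, H, W) sequence of lip frames.
        b, t = clip.shape[:2]
        feats = self.encoder(clip.flatten(0, 1)).view(b, t, -1)
        # Subtracting adjacent frame features removes the static component,
        # leaving a (batch, time-1, feat_dim) dynamic representation.
        return feats[:, 1:] - feats[:, :-1]

if __name__ == "__main__":
    dummy = torch.randn(2, 10, 1, 64, 64)        # 2 clips of 10 lip frames
    print(FeatureDifferenceBlock()(dummy).shape)  # torch.Size([2, 9, 128])
```

Consecutive-frame subtraction is only one plausible reading of "difference" here; the paper's Diffblock may difference against a reference frame or learned template, and the DRblock is described as operating at the pixel level rather than the feature level.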