An Experimental Evaluation on Deepfake Detection using Deep Face Recognition

Sreeraj Ramachandran, Aakash Varma Nadimpalli, A. Rattani
{"title":"An Experimental Evaluation on Deepfake Detection using Deep Face Recognition","authors":"Sreeraj Ramachandran, Aakash Varma Nadimpalli, A. Rattani","doi":"10.1109/ICCST49569.2021.9717407","DOIUrl":null,"url":null,"abstract":"Significant advances in deep learning have obtained hallmark accuracy rates for various computer vision applications. However, advances in deep generative models have also led to the generation of very realistic fake content, also known as deepfakes, causing a threat to privacy, democracy, and national security. Most of the current deepfake detection methods are deemed as a binary classification problem in distinguishing authentic images or videos from fake ones using two-class convolutional neural networks (CNNs). These methods are based on detecting visual artifacts, temporal or color inconsistencies produced by deep generative models. However, these methods require a large amount of real and fake data for model training and their performance drops significantly in cross dataset evaluation with samples generated using advanced deepfake generation techniques. In this paper, we thoroughly evaluate the efficacy of deep face recognition in identifying deepfakes, using different loss functions and deepfake generation techniques. Experimental investigations on challenging Celeb-DF and FaceForensics++ deepfake datasets suggest the efficacy of deep face recognition in identifying deepfakes over two-class CNNs and the ocular modality. Reported results suggest a maximum Area Under Curve (AUC) of 0.98 and Equal Error Rate (EER) of 7.1% in detecting deepfakes using face recognition on the Celeb-DF dataset. This EER is lower by 16.6% compared to the EER obtained for the two-class CNN and the ocular modality on the Celeb-DF dataset. Further on the FaceForensics++ dataset, an AUC of 0.99 and EER of 2.04% were obtained. The use of biometric facial recognition technology has the advantage of bypassing the need for a large amount of fake data for model training and obtaining better generalizability to evolving deepfake creation techniques.","PeriodicalId":101539,"journal":{"name":"2021 International Carnahan Conference on Security Technology (ICCST)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Carnahan Conference on Security Technology (ICCST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCST49569.2021.9717407","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 14

Abstract

Significant advances in deep learning have achieved hallmark accuracy rates for various computer vision applications. However, advances in deep generative models have also led to the generation of very realistic fake content, known as deepfakes, posing a threat to privacy, democracy, and national security. Most current deepfake detection methods treat the task as a binary classification problem, distinguishing authentic images or videos from fake ones using two-class convolutional neural networks (CNNs). These methods rely on detecting visual artifacts and temporal or color inconsistencies produced by deep generative models. However, they require a large amount of real and fake data for model training, and their performance drops significantly in cross-dataset evaluation on samples generated with advanced deepfake generation techniques. In this paper, we thoroughly evaluate the efficacy of deep face recognition in identifying deepfakes, using different loss functions and deepfake generation techniques. Experimental investigations on the challenging Celeb-DF and FaceForensics++ deepfake datasets suggest the efficacy of deep face recognition in identifying deepfakes over two-class CNNs and the ocular modality. Reported results show a maximum Area Under the Curve (AUC) of 0.98 and an Equal Error Rate (EER) of 7.1% in detecting deepfakes using face recognition on the Celeb-DF dataset. This EER is 16.6% lower than the EER obtained for the two-class CNN and the ocular modality on the Celeb-DF dataset. Further, on the FaceForensics++ dataset, an AUC of 0.99 and an EER of 2.04% were obtained. The use of biometric facial recognition technology has the advantage of bypassing the need for a large amount of fake data for model training and of obtaining better generalizability to evolving deepfake creation techniques.
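
As an illustration of the general approach described in the abstract (a minimal sketch, not the paper's implementation), the snippet below scores probe face embeddings against enrolled reference embeddings of the claimed identity using cosine similarity, then summarizes detection performance with AUC and EER. The embeddings are synthetic placeholders; in practice they would come from a pretrained deep face-recognition network (the specific network, loss function, and scoring rule used in the paper are not reproduced here), and scikit-learn's roc_curve is used only to compute the metrics.

```python
# Sketch (under stated assumptions): deepfake detection via face-recognition
# embeddings. A probe frame is scored by its cosine similarity to reference
# embeddings of the claimed identity; real videos score high, deepfakes low.
import numpy as np
from sklearn.metrics import roc_curve, auc


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def score_probe(probe: np.ndarray, references: np.ndarray) -> float:
    """Score a probe embedding against a gallery of reference embeddings
    of the claimed identity; higher means more likely authentic."""
    return max(cosine_similarity(probe, ref) for ref in references)


def auc_and_eer(labels: np.ndarray, scores: np.ndarray):
    """Compute AUC and Equal Error Rate from authenticity scores.
    labels: 1 = real video of the claimed identity, 0 = deepfake."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))   # operating point where FPR ~= FNR
    eer = (fpr[idx] + fnr[idx]) / 2
    return auc(fpr, tpr), eer


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 512                                           # typical embedding size
    identity = rng.normal(size=dim)                     # "true" identity direction
    refs = identity + 0.1 * rng.normal(size=(5, dim))   # enrolled references

    # Synthetic probes: real frames stay close to the identity direction,
    # deepfakes drift away from it.
    real = identity + 0.2 * rng.normal(size=(50, dim))
    fake = rng.normal(size=(50, dim))

    probes = np.vstack([real, fake])
    labels = np.array([1] * len(real) + [0] * len(fake))
    scores = np.array([score_probe(p, refs) for p in probes])

    roc_auc, eer = auc_and_eer(labels, scores)
    print(f"AUC = {roc_auc:.3f}, EER = {eer:.2%}")
```

Thresholding the similarity score at the EER operating point yields an accept/reject decision without training on fake data; in this sketch, swapping in a different face-recognition loss function would only change how the embeddings are produced, not the scoring or evaluation steps.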