{"title":"Explainable Deep-Fake Detection Using Visual Interpretability Methods","authors":"Badhrinarayan Malolan, Ankit Parekh, F. Kazi","doi":"10.1109/ICICT50521.2020.00051","DOIUrl":null,"url":null,"abstract":"Deep-Fakes have sparked concerns throughout the world because of their potentially explosive consequences. A dystopian future where all forms of digital media are potentially compromised and public trust in Government is scarce doesn't seem far off. If not dealt with the requisite seriousness, the situation could easily spiral out of control. Current methods of Deep-Fake detection aim to accurately solve the issue at hand but may fail to convince a lay-person of its reliability and thus, lack the trust of the general public. Since the fundamental issue revolves around earning the trust of human agents, the construction of interpretable and also easily explainable models is imperative. We propose a framework to detect these Deep-Fake videos using a Deep Learning Approach: we have trained a Convolutional Neural Network architecture on a database of extracted faces from FaceForensics' DeepFakeDetection Dataset. Furthermore, we have tested the model on various Explainable AI techniques such as LRP and LIME to provide crisp visualizations of the salient regions of the image focused on by the model. The prospective and elusive goal is to localize the facial manipulations caused by Faceswaps. We hope to use this approach to build trust between AI and Human agents and to demonstrate the applicability of XAI in various real-life scenarios.","PeriodicalId":445000,"journal":{"name":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","volume":"199 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"27","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICICT50521.2020.00051","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 27
Abstract
Deep-Fakes have sparked concern worldwide because of their potentially explosive consequences. A dystopian future in which all forms of digital media are potentially compromised and public trust in government is scarce does not seem far off. If not addressed with the requisite seriousness, the situation could easily spiral out of control. Current Deep-Fake detection methods aim to solve the problem accurately, but they may fail to convince a lay person of their reliability and therefore lack the trust of the general public. Since the fundamental issue revolves around earning the trust of human agents, building models that are interpretable and easily explainable is imperative. We propose a framework for detecting Deep-Fake videos using a deep learning approach: we train a Convolutional Neural Network architecture on a database of faces extracted from FaceForensics' DeepFakeDetection Dataset. We then apply Explainable AI (XAI) techniques such as Layer-wise Relevance Propagation (LRP) and LIME to the trained model to produce crisp visualizations of the salient image regions the model focuses on. The prospective, and as yet elusive, goal is to localize the facial manipulations introduced by face swaps. We hope this approach helps build trust between AI and human agents and demonstrates the applicability of XAI in real-life scenarios.
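As an illustration of what the LIME visualization step described above might look like, the sketch below applies LIME's image explainer to a face-crop classifier. It is a minimal sketch, not the authors' implementation: the model file name, image path, input resolution, and [0, 1] preprocessing are assumptions for illustration only.

```python
# Minimal sketch of a LIME explanation for a deep-fake face classifier.
# The model file, image path, input size, and preprocessing are hypothetical.
import numpy as np
from PIL import Image
from lime import lime_image
from skimage.segmentation import mark_boundaries
from tensorflow import keras

model = keras.models.load_model("deepfake_cnn.h5")  # hypothetical trained CNN (real vs. fake)

def classifier_fn(images):
    # LIME passes a batch of perturbed copies of the input image; rescale to
    # the range the network expects and return class probabilities.
    batch = np.asarray(images, dtype=np.float32) / 255.0
    return model.predict(batch)

# Load one extracted face crop at the network's (assumed) input resolution.
face = np.asarray(Image.open("face_crop.png").convert("RGB").resize((224, 224)))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    face, classifier_fn, top_labels=1, hide_color=0, num_samples=1000
)

# Highlight the superpixels that most support the predicted class.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img / 255.0, mask)  # RGB array with salient regions outlined
```

LRP would follow a similar workflow but relies on propagating relevance scores back through the network's layers (typically via a dedicated LRP implementation) rather than the black-box perturbation approach LIME uses here.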