Chit-Jie Chew, Yu-Cheng Lin, Ying-Chin Chen, Yun-Yi Fan, Jung-San Lee
{"title":"Preserving manipulated and synthetic Deepfake detection through face texture naturalness","authors":"Chit-Jie Chew, Yu-Cheng Lin, Ying-Chin Chen, Yun-Yi Fan, Jung-San Lee","doi":"10.1016/j.jisa.2024.103798","DOIUrl":null,"url":null,"abstract":"<div><p>With the rapid development of deep learning and face recognition technology, AI(Artificial Intelligence) experts have rated Deepfake cheating as the top AI threat. It is difficult for the human eye to distinguish the fake face images generated by Deepfake. Therefore, it has become a popular tool for criminals to seek benefits. Deepfake can be mainly divided into two types, a manipulated Deepfake that falsifies images of others by targeting real faces, and a synthetic Deepfake using GAN to generate a new fake image. So far, seldom cybersecurity system is able to detect these two types simultaneously. In this article, we aim to propose a hybrid Deepfake detection mechanism (HDDM) based on face texture and naturalness degree. HDDM constructs a unique texture from a facial image based on CNN(Convolutional Neural Network) and builds a naturalness degree recognition model via DNN(Deep Neural Network) to help cheating detection. Experimental results have proved that HDDM possesses a sound effect and stability for synthetic and manipulated Deepfake attacks. In particular, the WildDeepfake simulation has demonstrated the possibility of applying HDDM to the real world.</p></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"83 ","pages":"Article 103798"},"PeriodicalIF":3.8000,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information Security and Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214212624001017","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
With the rapid development of deep learning and face recognition technology, AI (Artificial Intelligence) experts have rated Deepfake cheating as the top AI threat. Fake face images generated by Deepfake are difficult for the human eye to distinguish, so the technique has become a popular tool for criminals seeking illicit gains. Deepfake can be divided into two main types: manipulated Deepfake, which falsifies images of others by targeting real faces, and synthetic Deepfake, which uses a GAN to generate an entirely new fake image. So far, few cybersecurity systems are able to detect both types simultaneously. In this article, we propose a hybrid Deepfake detection mechanism (HDDM) based on face texture and naturalness degree. HDDM constructs a unique texture representation from a facial image with a CNN (Convolutional Neural Network) and builds a naturalness degree recognition model with a DNN (Deep Neural Network) to support cheating detection. Experimental results show that HDDM achieves sound effectiveness and stability against both synthetic and manipulated Deepfake attacks. In particular, the WildDeepfake simulation demonstrates the feasibility of applying HDDM in the real world.
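To make the described pipeline concrete, the following is a minimal illustrative sketch of the two-stage idea the abstract outlines: a CNN that extracts a texture feature from a face image, followed by a DNN that scores its naturalness for a real/fake decision. The paper's actual architecture, layer sizes, class names, and fusion strategy are not given here, so everything below (TextureExtractor, NaturalnessClassifier, dimensions, label convention) is an assumption for illustration only, not the authors' implementation.

```python
# Illustrative sketch only: layer sizes, module names, and labels are assumptions.
import torch
import torch.nn as nn


class TextureExtractor(nn.Module):
    """Hypothetical CNN mapping a face crop to a texture feature vector."""

    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feature_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x).flatten(1)  # (batch, 64)
        return self.fc(h)            # (batch, feature_dim)


class NaturalnessClassifier(nn.Module):
    """Hypothetical DNN scoring the naturalness of a texture feature."""

    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),  # two classes: real vs. Deepfake (assumed)
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.mlp(features)


# Usage: face image -> texture feature -> naturalness-based real/fake decision.
face = torch.randn(1, 3, 224, 224)      # dummy RGB face crop
texture = TextureExtractor()(face)
logits = NaturalnessClassifier()(texture)
prediction = logits.argmax(dim=1)        # 0 = real, 1 = fake (assumed labels)
```

In this sketch the texture extractor and the naturalness classifier are trained as separate modules, mirroring the abstract's split between texture construction (CNN) and naturalness recognition (DNN); how the two stages are actually coupled in HDDM is described in the full paper.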
Journal Introduction
The Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications relevant to information security. JISA provides a common link between a vibrant scientific research community and industry professionals by offering a clear view of modern problems and challenges in information security and by identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.