GAN-generated Fake Face Image Detection using Opponent Color Local Binary Pattern and Deep Learning Technique
K. Remya Revi, Meera Mary Isaac, R. Antony, M. Wilscy
2022 International Conference on Connected Systems & Intelligence (CSI)
DOI: 10.1109/CSI54720.2022.9924077
Published: 2022-08-31
Citations: 1
Abstract
Advancements in AI techniques such as Generative Adversarial Networks (GANs) facilitate the creation of realistic-looking fake face images, and these images are used to create fake profiles on various social media platforms. In this work, we develop deep learning-based binary classification models to distinguish GAN-generated fake face images from camera-captured real face images. The classification models are developed by fine-tuning three lightweight state-of-the-art pre-trained Convolutional Neural Networks (CNNs) - GoogLeNet, ResNet-18, and MobileNet-v2 - using the transfer learning approach. In this method, instead of RGB images, joint color texture feature maps obtained using the Opponent Color Local Binary Pattern (OC-LBP) are used as input to the CNNs. For the experimental analysis, we use datasets that contain fake face images generated by Progressive Growing GAN (PGGAN) and Style-based GAN (StyleGAN2), together with camera-captured real face images from the CelebFaces Attributes High Quality (CelebA-HQ) and Flickr Faces High Quality (FFHQ) datasets. The proposed method shows remarkable performance in terms of test accuracy, generalization capability, and robustness against JPEG compression, and it also performs well when compared with state-of-the-art methods.
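The abstract describes two main ingredients: an OC-LBP joint color texture feature map used in place of the RGB input, and transfer learning on lightweight pre-trained CNNs. The sketch below shows one simplified way to build a cross-channel LBP map over opponent channel pairs; the function names, the choice of channel pairs (R-G, G-B, B-R), and the 8-neighbour, radius-1 neighbourhood are illustrative assumptions, not the paper's exact OC-LBP formulation.

```python
import numpy as np

def cross_channel_lbp(center_ch, neighbor_ch):
    """8-neighbour, radius-1 LBP codes where the centre pixel comes from one
    colour channel and the thresholded neighbours from another channel."""
    h, w = center_ch.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = center_ch[1:-1, 1:-1].astype(np.int16)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = neighbor_ch[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(np.int16)
        codes |= (neigh >= center).astype(np.uint8) << np.uint8(bit)
    return codes

def oc_lbp_feature_map(rgb):
    """Stack cross-channel LBP maps over three channel pairs into a 3-channel
    joint colour texture map (a simplified, assumed variant of OC-LBP)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([cross_channel_lbp(r, g),
                     cross_channel_lbp(g, b),
                     cross_channel_lbp(b, r)], axis=-1)
```

A minimal transfer-learning setup in the same spirit, assuming PyTorch/torchvision and MobileNet-v2 (the paper also fine-tunes GoogLeNet and ResNet-18); whether the backbone is frozen or fully fine-tuned is an assumption here, not something stated in the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary head on a pre-trained MobileNet-v2 (real vs. GAN-generated).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False  # assumption: freeze the backbone; fine-tuning all layers is also possible
model.classifier[1] = nn.Linear(model.last_channel, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

# Dummy batch standing in for OC-LBP feature maps resized to 224x224 and
# scaled to [0, 1]; labels: 0 = real, 1 = GAN-generated.
x = torch.rand(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

In practice the two pieces would be chained: each face image is converted to its OC-LBP feature map, resized to the network's input resolution, and fed through the fine-tuned CNN for real-versus-fake classification.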