{"title":"Digital Image Forensic Analyzer to Detect AI-generated Fake Images","authors":"Galamo Monkam, Jie Yan","doi":"10.1109/CACRE58689.2023.10208613","DOIUrl":null,"url":null,"abstract":"In recent years, the widespread use of smartphones and social media has led to a surge in the amount of digital content available. However, this increase in the use of digital images has also led to a rise in the use of techniques to alter image contents. Therefore, it is essential for both the image forensics field and the general public to be able to differentiate between genuine or authentic images and manipulated or fake imagery. Deep learning has made it easier to create unreal images, which underscores the need to establish a more robust platform to detect real from fake imagery. However, in the image forensics field, researchers often develop very complicated deep learning architectures to train the model. This training process is expensive, and the model size is often huge, which limits the usability of the model. This research focuses on the realism of state-of-the-art image manipulations and how difficult it is to detect them automatically or by humans. We built a machine learning model called G-JOB GAN, based on Generative Adversarial Networks (GAN), that can generate state-of-the-art, realistic-looking images with improved resolution and quality. Our model can detect a realistically generated image with an accuracy of 95.7%. Our near future aim is to implement a system that can detect fake images with a probability of odds of 1- P, where P is the chance of identical fingerprints. To achieve this objective, we have implemented and evaluated various GAN architectures such as Style GAN, Pro GAN, and the Original GAN.","PeriodicalId":447007,"journal":{"name":"2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CACRE58689.2023.10208613","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
Abstract
In recent years, the widespread use of smartphones and social media has led to a surge in the amount of digital content available. This growth in digital imagery has been accompanied by a rise in techniques for altering image content. It is therefore essential, both for the image forensics field and for the general public, to be able to distinguish authentic images from manipulated or fake ones. Deep learning has made it easier than ever to create synthetic images, which underscores the need for a more robust platform that separates real imagery from fake. However, researchers in image forensics often rely on very complex deep learning architectures; training them is expensive and the resulting models are large, which limits their practical usability. This research examines the realism of state-of-the-art image manipulations and how difficult they are to detect, whether automatically or by humans. We built a machine learning model called G-JOB GAN, based on Generative Adversarial Networks (GANs), that generates realistic-looking images with improved resolution and quality. Our model detects such realistically generated images with an accuracy of 95.7%. In the near future, we aim to implement a system that detects fake images with probability 1 − P, where P is the probability of identical fingerprints. Toward this objective, we have implemented and evaluated several GAN architectures, including StyleGAN, ProGAN, and the original GAN.
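The abstract does not describe how the G-JOB GAN detector is implemented, so the sketch below is only an illustration of the general approach it alludes to: a small convolutional discriminator, written here in PyTorch, trained with binary cross-entropy to label images as real or GAN-generated. The FakeImageDetector name, the layer sizes, and the 64x64 input resolution are assumptions for the example, not details taken from the paper.

```python
# Minimal sketch of a real-vs-fake image classifier; the actual G-JOB GAN
# architecture is not specified in the abstract, so every choice here
# (layers, sizes, hyperparameters) is illustrative only.
import torch
import torch.nn as nn


class FakeImageDetector(nn.Module):
    """Convolutional discriminator that outputs one logit: high = real, low = fake."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=4, stride=2, padding=1),  # 64x64 -> 32x32
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),       # 32x32 -> 16x16
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1),      # 16x16 -> 8x8
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 1),  # single logit; apply sigmoid for P(real)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    # One illustrative training step on a placeholder batch of 64x64 RGB images.
    model = FakeImageDetector()
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

    images = torch.randn(8, 3, 64, 64)            # placeholder data, not a real dataset
    labels = torch.randint(0, 2, (8, 1)).float()  # 1 = real, 0 = GAN-generated

    logits = model(images)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"training loss: {loss.item():.4f}")
```

In practice, such a detector would be trained on a mix of authentic photographs and images produced by the generators the paper evaluates (StyleGAN, ProGAN, and the original GAN); the reported 95.7% accuracy refers to the authors' G-JOB GAN model, not to this simplified sketch.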