Inzamamul Alam, Muhammad Shahid Muneer, Simon S. Woo
arXiv:2409.07913 · arXiv - CS - Computer Vision and Pattern Recognition · Published 2024-09-12
UGAD: Universal Generative AI Detector utilizing Frequency Fingerprints
In the wake of a fabricated explosion image at the Pentagon, the ability to
discern real images from fake counterparts has never been more critical. Our
study introduces a novel multi-modal approach to detect AI-generated images
amidst the proliferation of new generation methods such as Diffusion models.
Our method, UGAD, comprises three key detection steps: First, we transform
RGB images into YCbCr channels and apply an Integral Radial Operation to
emphasize salient radial features. Second, the Spatial Fourier Extraction
operation performs a spatial shift, and a pre-trained deep learning
network extracts optimal features. Finally, a deep neural network
classification stage processes the data through dense layers and applies
softmax for classification. Our approach significantly enhances the accuracy of
differentiating between real and AI-generated images, as evidenced by a 12.64%
increase in accuracy and 28.43% increase in AUC compared to existing
state-of-the-art methods.
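The frequency-fingerprint idea behind the first detection step can be sketched as follows. The abstract does not specify the exact form of the Integral Radial Operation, so this is a minimal, hypothetical illustration using a common surrogate: converting RGB to YCbCr (ITU-R BT.601 coefficients), taking the centered 2-D FFT magnitude per channel, and averaging it over concentric rings to obtain a radial frequency signature. Function names and the bin count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an HxWx3 RGB array (values in 0-255) to YCbCr (ITU-R BT.601)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def radial_profile(channel, n_bins=64):
    """Average the centered FFT magnitude spectrum over concentric rings,
    yielding a 1-D radial frequency signature (a simple stand-in for the
    paper's Integral Radial Operation)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(channel)))
    h, w = spectrum.shape
    yy, xx = np.indices((h, w))
    rr = np.hypot(yy - h / 2, xx - w / 2)          # distance from spectrum center
    bins = np.minimum((rr / rr.max() * n_bins).astype(int), n_bins - 1)
    sums = np.bincount(bins.ravel(), weights=spectrum.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return sums / np.maximum(counts, 1)

def frequency_fingerprint(rgb_image):
    """Concatenate per-channel radial spectra into one feature vector,
    which a downstream dense + softmax classifier could consume."""
    ycbcr = rgb_to_ycbcr(rgb_image.astype(np.float64))
    return np.concatenate([radial_profile(ycbcr[..., c]) for c in range(3)])

rng = np.random.default_rng(0)
fake_img = rng.integers(0, 256, size=(128, 128, 3))
features = frequency_fingerprint(fake_img)
print(features.shape)  # (192,) = 3 channels x 64 radial bins
```

In practice, GAN- and diffusion-generated images tend to leave periodic artifacts in the high-frequency rings of this profile, which is what makes such radial spectra useful as detector inputs.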