MH-FFNet: Leveraging mid-high frequency information for robust fine-grained face forgery detection

Expert Systems with Applications · IF 7.5 · Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2025-06-01 · Epub Date: 2025-03-10 · DOI: 10.1016/j.eswa.2025.127108
Kai Zhou , Guanglu Sun , Jun Wang , Linsen Yu , Tianlin Li
Citations: 0

Abstract

The rapid advancement of Deepfake technology has rendered the generation of forged faces highly realistic, while simultaneously introducing significant societal security concerns. The accurate detection of forged facial images has thus emerged as an urgent issue and a formidable challenge. In this paper, we approach face forgery detection as a fine-grained classification problem due to the subtle differences between real and fake faces. We propose a detection framework termed the Mid-High Frequency Based Fine-Grained Network (MH-FFNet), which enhances the detection of forged faces by leveraging mid- and high-frequency information to capture fine-grained forgery cues. To better extract and utilize these cues, we devise two fine-grained feature enhancement modules: the Patch-based Fine-Grained Enhancement Module (P-FGEM) and the Feature-based Fine-Grained Enhancement Module (F-FGEM). The P-FGEM module focuses on extracting mid- and high-frequency information from shallow feature blocks, enhancing forgery representations in shallow features. This design effectively mitigates the loss of mid- and high-frequency cues as the network deepens, thereby improving the algorithm’s sensitivity to forgery cues. In contrast, the F-FGEM module captures mid- and high-frequency information from mid-level global features, further enriching forgery representations in these features and significantly enhancing their discriminative power. Experimental results indicate that our proposed method achieves an AUC of 99.44% on the FF++ (C23) dataset and 83.44% on the Celeb-DF (V2) dataset, demonstrating the algorithm’s superior detection capability and generalization performance. Additionally, we conduct experiments to comprehensively illustrate the robustness of the algorithm against common image post-processing attacks.
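The abstract describes extracting mid- and high-frequency information from feature maps but does not specify the filtering mechanism. As a rough illustration of the general idea (not the paper's actual implementation), the sketch below applies one common approach: a 2D FFT with a radial mask that suppresses low-frequency components, leaving mid- and high-frequency content. The function name and the `low_cut` cutoff are hypothetical.

```python
import numpy as np

def mid_high_pass(feature_map, low_cut=0.1):
    """Suppress low-frequency content of a 2D map, keeping mid/high
    frequencies. Illustrative sketch only, not the paper's method."""
    h, w = feature_map.shape
    # Shift the spectrum so the DC component sits at the center.
    spec = np.fft.fftshift(np.fft.fft2(feature_map))
    # Radial mask: zero out frequencies below low_cut (as a fraction
    # of the Nyquist radius), keep everything at or above it.
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.sqrt(((yy - h / 2) / (h / 2)) ** 2 +
                     ((xx - w / 2) / (w / 2)) ** 2)
    mask = (radius >= low_cut).astype(float)
    # Back to the spatial domain; input is real, so keep the real part.
    return np.fft.ifft2(np.fft.ifftshift(spec * mask)).real

# A constant (pure DC) map is fully suppressed, while a rapidly
# oscillating pattern passes through essentially unchanged.
flat = np.ones((32, 32))
filtered = mid_high_pass(flat)  # magnitudes near zero
```

In a detector along these lines, such a filter would be applied per channel to intermediate feature maps, and the filtered response fused back into the backbone features to emphasize the subtle textural cues that distinguish forged from real faces.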
Source journal

Expert Systems with Applications (Engineering Technology, Engineering: Electronic & Electrical)

CiteScore: 13.80 · Self-citation rate: 10.60% · Articles per year: 2045 · Review time: 8.7 months
Journal overview: Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.