Hierarchical multi-source cues fusion for mono-to-binaural based Audio Deepfake Detection

Information Fusion · Impact Factor 15.5 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · Published: 2025-08-01 (Epub: 2025-03-14) · DOI: 10.1016/j.inffus.2025.103097
Rui Liu, Jinhua Zhang, Haizhou Li
{"title":"Hierarchical multi-source cues fusion for mono-to-binaural based Audio Deepfake Detection","authors":"Rui Liu ,&nbsp;Jinhua Zhang ,&nbsp;Haizhou Li","doi":"10.1016/j.inffus.2025.103097","DOIUrl":null,"url":null,"abstract":"<div><div>Audio Deepfake Detection (ADD) targets identifying forgery cues in audio generated by text-to-speech (TTS), voice conversion (VC), voice editing, etc. With the advancement of generative artificial intelligence(AI), ADD has gained increasing attention. In recent years, mono-to-binaural (M2B) conversion has been explored in ADD to uncover forgery cues from a novel perspective. However, M2B-based methods may weaken or overlook unique forgery cues specific to mono, limiting detection performance. To this end, this paper proposes a <strong>H</strong>ierarchical <strong>M</strong>ulti-<strong>S</strong>ource <strong>C</strong>ues <strong>F</strong>usion network for more accurate <strong>ADD (HMSCF-ADD)</strong>. This approach leverages mono alongside binaural left and right channels as three distinct sources for hierarchical information fusion, it distinguishes common and binaural-specific features while removing redundant information for more effective detection. Specifically, binaural-specific and common features are first extracted and fused as binaural information, followed by dynamic fusion of mono and binaural information to achieve hierarchical fusion. Experiments on ASVspoof2019-LA and ASVspoof2021-PA datasets demonstrate that HMSCF-ADD outperforms all mono-input and M2B-based baselines. Detailed comparisons on fusion strategies and M2B conversion further validate the framework’s effectiveness. The codes are available at: <span><span>https://github.com/AI-S2-Lab/HMSCF-ADD</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"120 ","pages":"Article 103097"},"PeriodicalIF":15.5000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525001708","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/3/14 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Audio Deepfake Detection (ADD) aims to identify forgery cues in audio generated by text-to-speech (TTS), voice conversion (VC), voice editing, and similar techniques. With the advancement of generative artificial intelligence (AI), ADD has gained increasing attention. In recent years, mono-to-binaural (M2B) conversion has been explored for ADD as a way to uncover forgery cues from a novel perspective. However, M2B-based methods may weaken or overlook forgery cues unique to the mono signal, limiting detection performance. To this end, this paper proposes a Hierarchical Multi-Source Cues Fusion network for more accurate ADD (HMSCF-ADD). The approach treats the mono signal and the binaural left and right channels as three distinct sources for hierarchical information fusion; it distinguishes common from binaural-specific features while removing redundant information for more effective detection. Specifically, binaural-specific and common features are first extracted and fused into a binaural representation, which is then dynamically fused with the mono information to achieve hierarchical fusion. Experiments on the ASVspoof2019-LA and ASVspoof2021-PA datasets demonstrate that HMSCF-ADD outperforms all mono-input and M2B-based baselines. Detailed comparisons of fusion strategies and M2B conversion further validate the framework's effectiveness. The code is available at: https://github.com/AI-S2-Lab/HMSCF-ADD.
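The abstract describes a two-stage design: binaural-specific and common features are first fused into a binaural representation, which is then dynamically fused with the mono features. Below is a minimal PyTorch sketch of what such a hierarchy could look like; the module names, feature dimension, the sum/difference split into common and specific subspaces, and the sigmoid gate are all illustrative assumptions rather than the authors' implementation (refer to the linked repository for the actual code).

import torch
import torch.nn as nn

class HierarchicalFusionSketch(nn.Module):
    """Illustrative two-stage fusion over mono, left, and right features."""

    def __init__(self, dim: int = 256):
        super().__init__()
        # Per-source encoders (assumed: one projection per source).
        self.mono_enc = nn.Linear(dim, dim)
        self.left_enc = nn.Linear(dim, dim)
        self.right_enc = nn.Linear(dim, dim)
        # Assumed split: the channel sum approximates "common" content,
        # the channel difference approximates "binaural-specific" content.
        self.common_proj = nn.Linear(dim, dim)
        self.specific_proj = nn.Linear(dim, dim)
        self.binaural_fuse = nn.Linear(2 * dim, dim)
        # Assumed dynamic fusion: a sigmoid gate weights mono vs. binaural.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.classifier = nn.Linear(dim, 2)  # bona fide vs. spoof logits

    def forward(self, mono, left, right):
        m = self.mono_enc(mono)
        l = self.left_enc(left)
        r = self.right_enc(right)
        # Stage 1: fuse common and binaural-specific features into a
        # single binaural representation.
        common = self.common_proj(l + r)
        specific = self.specific_proj(l - r)
        binaural = self.binaural_fuse(torch.cat([common, specific], dim=-1))
        # Stage 2: gated (dynamic) fusion of mono and binaural information.
        g = self.gate(torch.cat([m, binaural], dim=-1))
        fused = g * m + (1.0 - g) * binaural
        return self.classifier(fused)

# Toy usage: utterance-level 256-dim features per source, batch of 4.
model = HierarchicalFusionSketch(dim=256)
mono, left, right = (torch.randn(4, 256) for _ in range(3))
logits = model(mono, left, right)  # shape: (4, 2)

The gate makes the mono/binaural trade-off input-dependent, which matches the abstract's claim that mono-specific cues should not be drowned out by the M2B branch; how the paper actually balances the sources is specified in the repository, not here.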
Source Journal
Information Fusion
Category: Engineering & Technology, Computer Science: Theory & Methods
CiteScore: 33.20
Self-citation rate: 4.30%
Annual article output: 161
Review time: 7.9 months
About the journal: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.
Latest articles in this journal
- (a, b)-FG-functionals: a generalization of the Sugeno integral with floating domains in arbitrary closed real intervals and its applications
- FedCLIPOT: Federated CLIP model via parameter reusing and optimal transport
- Multimodal semantic-scale network for remote sensing image classification
- GTEE: A global timestamp encoding enhanced method for robust time series imputation in complex missing scenarios
- Resilient distributed Kalman filtering for cyber-physical systems via mean subsequence reduction