{"title":"Hierarchical multi-source cues fusion for mono-to-binaural based Audio Deepfake Detection","authors":"Rui Liu , Jinhua Zhang , Haizhou Li","doi":"10.1016/j.inffus.2025.103097","DOIUrl":null,"url":null,"abstract":"<div><div>Audio Deepfake Detection (ADD) targets identifying forgery cues in audio generated by text-to-speech (TTS), voice conversion (VC), voice editing, etc. With the advancement of generative artificial intelligence(AI), ADD has gained increasing attention. In recent years, mono-to-binaural (M2B) conversion has been explored in ADD to uncover forgery cues from a novel perspective. However, M2B-based methods may weaken or overlook unique forgery cues specific to mono, limiting detection performance. To this end, this paper proposes a <strong>H</strong>ierarchical <strong>M</strong>ulti-<strong>S</strong>ource <strong>C</strong>ues <strong>F</strong>usion network for more accurate <strong>ADD (HMSCF-ADD)</strong>. This approach leverages mono alongside binaural left and right channels as three distinct sources for hierarchical information fusion, it distinguishes common and binaural-specific features while removing redundant information for more effective detection. Specifically, binaural-specific and common features are first extracted and fused as binaural information, followed by dynamic fusion of mono and binaural information to achieve hierarchical fusion. Experiments on ASVspoof2019-LA and ASVspoof2021-PA datasets demonstrate that HMSCF-ADD outperforms all mono-input and M2B-based baselines. Detailed comparisons on fusion strategies and M2B conversion further validate the framework’s effectiveness. The codes are available at: <span><span>https://github.com/AI-S2-Lab/HMSCF-ADD</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"120 ","pages":"Article 103097"},"PeriodicalIF":14.7000,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525001708","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Audio Deepfake Detection (ADD) aims to identify forgery cues in audio generated by text-to-speech (TTS), voice conversion (VC), voice editing, and similar techniques. With the advancement of generative artificial intelligence (AI), ADD has gained increasing attention. In recent years, mono-to-binaural (M2B) conversion has been explored in ADD to uncover forgery cues from a novel perspective. However, M2B-based methods may weaken or overlook forgery cues unique to the mono signal, limiting detection performance. To this end, this paper proposes a Hierarchical Multi-Source Cues Fusion network for more accurate ADD (HMSCF-ADD). The approach treats the mono signal and the binaural left and right channels as three distinct sources for hierarchical information fusion; it distinguishes common and binaural-specific features while removing redundant information for more effective detection. Specifically, binaural-specific and common features are first extracted and fused into binaural information, followed by dynamic fusion of mono and binaural information to achieve hierarchical fusion. Experiments on the ASVspoof2019-LA and ASVspoof2021-PA datasets demonstrate that HMSCF-ADD outperforms all mono-input and M2B-based baselines. Detailed comparisons of fusion strategies and M2B conversion further validate the framework's effectiveness. The code is available at: https://github.com/AI-S2-Lab/HMSCF-ADD.
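To make the two-stage fusion described above concrete, the sketch below (PyTorch) illustrates the general idea: the left and right channels are first combined into binaural-specific and common components, and the result is then dynamically gated against the mono features. All module names, dimensions, and the gating mechanism are illustrative assumptions rather than the authors' implementation; the official code is in the linked GitHub repository.

```python
# Minimal sketch of hierarchical multi-source fusion for ADD.
# Assumed inputs: utterance-level feature vectors for the mono signal and the
# M2B-converted left/right channels. Not the HMSCF-ADD implementation.
import torch
import torch.nn as nn


class HierarchicalFusionSketch(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Shared encoder applied to each source's front-end features.
        self.encoder = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        # Projections separating binaural-specific and common components.
        self.specific_proj = nn.Linear(2 * feat_dim, feat_dim)
        self.common_proj = nn.Linear(2 * feat_dim, feat_dim)
        # Gate that dynamically weights mono vs. binaural information.
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 1), nn.Sigmoid())
        self.classifier = nn.Linear(feat_dim, 2)  # bona fide vs. spoof

    def forward(self, mono, left, right):
        # mono, left, right: (batch, feat_dim) utterance-level features.
        m, l, r = self.encoder(mono), self.encoder(left), self.encoder(right)

        # Stage 1: fuse left/right into binaural-specific + common features.
        lr = torch.cat([l, r], dim=-1)
        binaural = self.specific_proj(lr) + self.common_proj(lr)

        # Stage 2: dynamic (gated) fusion of mono and binaural information.
        g = self.gate(torch.cat([m, binaural], dim=-1))
        fused = g * m + (1.0 - g) * binaural
        return self.classifier(fused)


if __name__ == "__main__":
    model = HierarchicalFusionSketch()
    x = torch.randn(4, 256)          # dummy features for a quick shape check
    print(model(x, x, x).shape)      # torch.Size([4, 2])
```

The gated sum in stage 2 is one simple way to realize "dynamic fusion"; the paper's actual fusion strategy may differ (e.g., attention-based weighting), and the feature extractors here stand in for whatever front end the authors use.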
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, and multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.