Yuang Li, Min Zhang, Mengxin Ren, Miaomiao Ma, Daimeng Wei, Hao Yang
{"title":"跨域音频深度伪造检测:数据集与分析","authors":"Yuang Li, Min Zhang, Mengxin Ren, Miaomiao Ma, Daimeng Wei, Hao Yang","doi":"arxiv-2404.04904","DOIUrl":null,"url":null,"abstract":"Audio deepfake detection (ADD) is essential for preventing the misuse of\nsynthetic voices that may infringe on personal rights and privacy. Recent\nzero-shot text-to-speech (TTS) models pose higher risks as they can clone\nvoices with a single utterance. However, the existing ADD datasets are\noutdated, leading to suboptimal generalization of detection models. In this\npaper, we construct a new cross-domain ADD dataset comprising over 300 hours of\nspeech data that is generated by five advanced zero-shot TTS models. To\nsimulate real-world scenarios, we employ diverse attack methods and audio\nprompts from different datasets. Experiments show that, through novel\nattack-augmented training, the Wav2Vec2-large and Whisper-medium models achieve\nequal error rates of 4.1\\% and 6.5\\% respectively. Additionally, we demonstrate\nour models' outstanding few-shot ADD ability by fine-tuning with just one\nminute of target-domain data. Nonetheless, neural codec compressors greatly\naffect the detection accuracy, necessitating further research.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":"65 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-Domain Audio Deepfake Detection: Dataset and Analysis\",\"authors\":\"Yuang Li, Min Zhang, Mengxin Ren, Miaomiao Ma, Daimeng Wei, Hao Yang\",\"doi\":\"arxiv-2404.04904\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Audio deepfake detection (ADD) is essential for preventing the misuse of\\nsynthetic voices that may infringe on personal rights and privacy. Recent\\nzero-shot text-to-speech (TTS) models pose higher risks as they can clone\\nvoices with a single utterance. However, the existing ADD datasets are\\noutdated, leading to suboptimal generalization of detection models. In this\\npaper, we construct a new cross-domain ADD dataset comprising over 300 hours of\\nspeech data that is generated by five advanced zero-shot TTS models. To\\nsimulate real-world scenarios, we employ diverse attack methods and audio\\nprompts from different datasets. Experiments show that, through novel\\nattack-augmented training, the Wav2Vec2-large and Whisper-medium models achieve\\nequal error rates of 4.1\\\\% and 6.5\\\\% respectively. Additionally, we demonstrate\\nour models' outstanding few-shot ADD ability by fine-tuning with just one\\nminute of target-domain data. 
Nonetheless, neural codec compressors greatly\\naffect the detection accuracy, necessitating further research.\",\"PeriodicalId\":501178,\"journal\":{\"name\":\"arXiv - CS - Sound\",\"volume\":\"65 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Sound\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2404.04904\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Sound","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2404.04904","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cross-Domain Audio Deepfake Detection: Dataset and Analysis
Audio deepfake detection (ADD) is essential for preventing the misuse of synthetic voices that may infringe on personal rights and privacy. Recent zero-shot text-to-speech (TTS) models pose even higher risks, as they can clone a voice from a single utterance. However, existing ADD datasets are outdated, leading to suboptimal generalization of detection models. In this paper, we construct a new cross-domain ADD dataset comprising over 300 hours of speech data generated by five advanced zero-shot TTS models. To simulate real-world scenarios, we employ diverse attack methods and audio prompts drawn from different datasets. Experiments show that, through novel attack-augmented training, the Wav2Vec2-large and Whisper-medium models achieve equal error rates (EERs) of 4.1% and 6.5%, respectively.
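The EER is the operating point at which the false acceptance rate equals the false rejection rate. As a minimal illustration (not the paper's evaluation code), it can be estimated from detector scores with scikit-learn's ROC utilities:

```python
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(labels, scores):
    """Estimate the equal error rate (EER).

    labels: 1 = bonafide speech, 0 = deepfake.
    scores: higher values mean "more likely bonafide".
    """
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr  # false rejection rate of bonafide speech
    # The EER lies where the false acceptance and false rejection curves cross.
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2

# Toy usage with made-up scores:
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.6, 0.3, 0.1])
print(f"EER = {compute_eer(labels, scores):.1%}")
```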
Additionally, we demonstrate our models' strong few-shot ADD capability by fine-tuning them on just one minute of target-domain data.
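A hedged sketch of what such few-shot fine-tuning could look like with the Hugging Face transformers library; the checkpoint name, optimizer settings, and label convention are illustrative assumptions, not the paper's exact recipe:

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

# Assumed checkpoint: the paper fine-tunes a Wav2Vec2-large model,
# but the exact pretrained weights are not specified in the abstract.
ckpt = "facebook/wav2vec2-large"
extractor = Wav2Vec2FeatureExtractor.from_pretrained(ckpt)
model = Wav2Vec2ForSequenceClassification.from_pretrained(ckpt, num_labels=2)

def finetune_step(clips, clip_labels, optimizer):
    # clips: list of 16 kHz mono waveforms (np.ndarray), ~1 minute in total.
    # clip_labels: 0 = bonafide, 1 = deepfake (hypothetical convention).
    inputs = extractor(clips, sampling_rate=16000,
                       return_tensors="pt", padding=True)
    out = model(**inputs, labels=torch.tensor(clip_labels))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
```

With only a minute of audio, freezing most of the encoder and updating only the classification head would be a common way to reduce overfitting, though the abstract does not say which parameters were tuned.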
Nonetheless, neural codec compressors greatly affect detection accuracy, necessitating further research.
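To probe this failure mode, one could re-encode utterances through a neural codec before scoring them. Below is a minimal sketch using Meta's EnCodec; the abstract does not name the specific compressors used in the paper, so this codec and bandwidth choice are assumptions:

```python
import torch
import torchaudio
from encodec import EncodecModel  # pip install encodec

# Assumed codec: EnCodec at 24 kHz, a stand-in for the unnamed
# neural codec compressors evaluated in the paper.
codec = EncodecModel.encodec_model_24khz()
codec.set_target_bandwidth(6.0)  # kbps; lower bandwidth = stronger compression

def codec_roundtrip(path):
    wav, sr = torchaudio.load(path)  # (channels, samples)
    wav = wav.mean(dim=0, keepdim=True)  # downmix to mono
    wav = torchaudio.functional.resample(wav, sr, codec.sample_rate)
    with torch.no_grad():
        frames = codec.encode(wav.unsqueeze(0))  # quantize to discrete codes
        recon = codec.decode(frames)             # resynthesize the waveform
    return recon.squeeze(0)  # compressed copy to feed the detector
```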