Karn N. Watcharasupat;Chih-Wei Wu;Yiwei Ding;Iroro Orife;Aaron J. Hipple;Phillip A. Williams;Scott Kramer;Alexander Lerch;William Wolcott
{"title":"用于电影音源分离的广义分带神经网络","authors":"Karn N. Watcharasupat;Chih-Wei Wu;Yiwei Ding;Iroro Orife;Aaron J. Hipple;Phillip A. Williams;Scott Kramer;Alexander Lerch;William Wolcott","doi":"10.1109/OJSP.2023.3339428","DOIUrl":null,"url":null,"abstract":"Cinematic audio source separation is a relatively new subtask of audio source separation, with the aim of extracting the dialogue, music, and effects stems from their mixture. In this work, we developed a model generalizing the Bandsplit RNN for any complete or overcomplete partitions of the frequency axis. Psychoacoustically motivated frequency scales were used to inform the band definitions which are now defined with redundancy for more reliable feature extraction. A loss function motivated by the signal-to-noise ratio and the sparsity-promoting property of the 1-norm was proposed. We additionally exploit the information-sharing property of a common-encoder setup to reduce computational complexity during both training and inference, improve separation performance for hard-to-generalize classes of sounds, and allow flexibility during inference time with detachable decoders. Our best model sets the state of the art on the Divide and Remaster dataset with performance above the ideal ratio mask for the dialogue stem.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"5 ","pages":"73-81"},"PeriodicalIF":2.9000,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10342812","citationCount":"0","resultStr":"{\"title\":\"A Generalized Bandsplit Neural Network for Cinematic Audio Source Separation\",\"authors\":\"Karn N. Watcharasupat;Chih-Wei Wu;Yiwei Ding;Iroro Orife;Aaron J. Hipple;Phillip A. 
Williams;Scott Kramer;Alexander Lerch;William Wolcott\",\"doi\":\"10.1109/OJSP.2023.3339428\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cinematic audio source separation is a relatively new subtask of audio source separation, with the aim of extracting the dialogue, music, and effects stems from their mixture. In this work, we developed a model generalizing the Bandsplit RNN for any complete or overcomplete partitions of the frequency axis. Psychoacoustically motivated frequency scales were used to inform the band definitions which are now defined with redundancy for more reliable feature extraction. A loss function motivated by the signal-to-noise ratio and the sparsity-promoting property of the 1-norm was proposed. We additionally exploit the information-sharing property of a common-encoder setup to reduce computational complexity during both training and inference, improve separation performance for hard-to-generalize classes of sounds, and allow flexibility during inference time with detachable decoders. 
Our best model sets the state of the art on the Divide and Remaster dataset with performance above the ideal ratio mask for the dialogue stem.\",\"PeriodicalId\":73300,\"journal\":{\"name\":\"IEEE open journal of signal processing\",\"volume\":\"5 \",\"pages\":\"73-81\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2023-12-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10342812\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE open journal of signal processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10342812/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE open journal of signal processing","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10342812/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Cinematic audio source separation is a relatively new subtask of audio source separation, with the aim of extracting the dialogue, music, and effects stems from their mixture. In this work, we developed a model generalizing the Bandsplit RNN for any complete or overcomplete partitions of the frequency axis. Psychoacoustically motivated frequency scales were used to inform the band definitions which are now defined with redundancy for more reliable feature extraction. A loss function motivated by the signal-to-noise ratio and the sparsity-promoting property of the 1-norm was proposed. We additionally exploit the information-sharing property of a common-encoder setup to reduce computational complexity during both training and inference, improve separation performance for hard-to-generalize classes of sounds, and allow flexibility during inference time with detachable decoders. Our best model sets the state of the art on the Divide and Remaster dataset with performance above the ideal ratio mask for the dialogue stem.
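Two ideas from the abstract can be illustrated concretely: an overcomplete partition of the frequency axis using a psychoacoustic (mel) scale, and a loss combining the signal-to-noise ratio with a sparsity-promoting 1-norm term. The sketch below is an illustrative interpretation only; the band count, overlap width, mel parameterization, and loss weighting are assumptions, not the paper's actual configuration.

```python
import numpy as np

def mel_band_edges(n_bins, sr, n_bands, overlap_bins=2):
    """Partition STFT frequency bins into mel-spaced bands, widened by a
    few bins on each side so neighboring bands overlap. The overlap makes
    the partition overcomplete (redundant), in the spirit of the paper's
    generalized band definitions. Parameter values are illustrative."""
    f_max = sr / 2.0
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)          # Hz -> mel (HTK formula)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)    # mel -> Hz
    edges_hz = inv_mel(np.linspace(0.0, mel(f_max), n_bands + 1))
    edges_bin = np.round(edges_hz / f_max * (n_bins - 1)).astype(int)
    bands = []
    for i in range(n_bands):
        lo = max(edges_bin[i] - overlap_bins, 0)
        hi = min(edges_bin[i + 1] + overlap_bins, n_bins)
        bands.append((lo, hi))  # half-open bin range [lo, hi)
    return bands

def snr_l1_loss(est, ref, eps=1e-8):
    """Negative SNR between estimate and reference, plus a 1-norm
    sparsity penalty on the estimate (a sketch of the loss idea,
    not the paper's exact formulation)."""
    snr = 10.0 * np.log10(np.sum(ref ** 2) / (np.sum((ref - est) ** 2) + eps) + eps)
    sparsity = np.mean(np.abs(est))
    return -snr + sparsity
```

With, e.g., 1025 bins and 8 bands, the band widths grow with frequency (coarser resolution where hearing is less acute), and the summed band widths exceed the number of bins, confirming the redundancy that the abstract credits with more reliable feature extraction.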