Temporal-Frequency-Spatial Features Fusion for Multi-channel Informed Target Speech Separation

W. Zhang, Bin Lin, Li Ma, Aolong Zhou, Guoli Wu
{"title":"多通道信息目标语音分离的时频空特征融合","authors":"W. Zhang, Bin Lin, Li Ma, Aolong Zhou, Guoli Wu","doi":"10.1109/ICICSP55539.2022.10050617","DOIUrl":null,"url":null,"abstract":"Our goal is to make full use of time-frequency domain features and spatial domain features of the multichannel speech signal, and we propose an end-to-end multichannel target speech separation method based on temporal-frequency-spatial feature fusion, called the cTFS model. For the target speech separation task, the cTFS model takes the angel feature of the target speech signal as the prior knowledge, then predicts the complex ideal ratio mask target with a complex U-shaped network. We achieve the reconstruction of the target speech signal by signal approximation. Furthermore, a multi-channel target speaker separation dataset is constructed based on the WSJ0-2mix dataset based on the signal reverberation model. The performance of each target speaker separation model is evaluated on this dataset using the evaluation metrics SDR, SI-SNR, PESQ, and STOI. Experimental results show the effectiveness of the proposed method as well as the benefit of incorporating angle feature information in multichannel speech separation.","PeriodicalId":281095,"journal":{"name":"2022 5th International Conference on Information Communication and Signal Processing (ICICSP)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Temporal-Frequency-Spatial Features Fusion for Multi-channel Informed Target Speech Separation\",\"authors\":\"W. Zhang, Bin Lin, Li Ma, Aolong Zhou, Guoli Wu\",\"doi\":\"10.1109/ICICSP55539.2022.10050617\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Our goal is to make full use of time-frequency domain features and spatial domain features of the multichannel speech signal, and we propose an end-to-end multichannel target speech separation method based on temporal-frequency-spatial feature fusion, called the cTFS model. For the target speech separation task, the cTFS model takes the angel feature of the target speech signal as the prior knowledge, then predicts the complex ideal ratio mask target with a complex U-shaped network. We achieve the reconstruction of the target speech signal by signal approximation. Furthermore, a multi-channel target speaker separation dataset is constructed based on the WSJ0-2mix dataset based on the signal reverberation model. The performance of each target speaker separation model is evaluated on this dataset using the evaluation metrics SDR, SI-SNR, PESQ, and STOI. 
Experimental results show the effectiveness of the proposed method as well as the benefit of incorporating angle feature information in multichannel speech separation.\",\"PeriodicalId\":281095,\"journal\":{\"name\":\"2022 5th International Conference on Information Communication and Signal Processing (ICICSP)\",\"volume\":\"7 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 5th International Conference on Information Communication and Signal Processing (ICICSP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICICSP55539.2022.10050617\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 5th International Conference on Information Communication and Signal Processing (ICICSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICICSP55539.2022.10050617","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Our goal is to make full use of the time-frequency domain features and spatial domain features of the multichannel speech signal. We propose an end-to-end multichannel target speech separation method based on temporal-frequency-spatial feature fusion, called the cTFS model. For the target speech separation task, the cTFS model takes the angle feature of the target speech signal as prior knowledge and predicts the complex ideal ratio mask with a complex U-shaped network. The target speech signal is then reconstructed by signal approximation. Furthermore, a multichannel target speaker separation dataset is constructed from the WSJ0-2mix dataset using a signal reverberation model. The performance of each target speaker separation model is evaluated on this dataset with the metrics SDR, SI-SNR, PESQ, and STOI. Experimental results show the effectiveness of the proposed method as well as the benefit of incorporating angle feature information in multichannel speech separation.
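
The abstract describes the pipeline only at a high level. The following is a minimal sketch of the mask-and-reconstruct step it implies (a predicted complex ideal ratio mask applied to the mixture spectrogram, followed by waveform resynthesis); it is not the authors' implementation, and the mask-predicting network (the complex U-shaped model conditioned on the target's angle feature) is stubbed out as a hypothetical predict_cirm callable.

```python
# Sketch of complex-ideal-ratio-mask (cIRM) based target reconstruction.
# NOT the authors' code: predict_cirm and angle_feature are hypothetical
# placeholders for the trained complex U-shaped network and its directional input.
import numpy as np
from scipy.signal import stft, istft

def reconstruct_target(mixture, angle_feature, predict_cirm,
                       fs=16000, nperseg=512, noverlap=256):
    """Apply a predicted complex mask to one reference channel of a
    multichannel mixture (shape: channels x samples) and return the
    estimated target waveform."""
    # STFT of the reference (first) channel: complex array (freq_bins, frames).
    _, _, Y = stft(mixture[0], fs=fs, nperseg=nperseg, noverlap=noverlap)

    # The network predicts a complex mask of the same shape as Y from the
    # multichannel spectra and the target-direction (angle) feature.
    M = predict_cirm(Y, angle_feature)

    # Element-wise complex multiplication implements masking in the T-F domain:
    # S_hat(f, t) = M(f, t) * Y(f, t).
    S_hat = M * Y

    # Inverse STFT resynthesizes the time-domain estimate of the target speech.
    _, s_hat = istft(S_hat, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return s_hat
```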
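Among the listed evaluation metrics, SI-SNR has a simple closed form. The reference implementation below follows the standard definition used in speech-separation work; it is included for clarity and is not taken from the paper's evaluation code.

```python
# Scale-invariant SNR (SI-SNR), standard definition; shown for reference only.
import numpy as np

def si_snr(estimate, reference, eps=1e-8):
    """SI-SNR in dB between an estimated and a reference waveform (1-D arrays)."""
    # Remove the mean so the measure is invariant to DC offset.
    estimate = estimate - np.mean(estimate)
    reference = reference - np.mean(reference)

    # Project the estimate onto the reference:
    # s_target = (<est, ref> / ||ref||^2) * ref.
    s_target = (np.dot(estimate, reference) /
                (np.dot(reference, reference) + eps)) * reference

    # Everything orthogonal to the reference counts as noise.
    e_noise = estimate - s_target

    return 10 * np.log10((np.sum(s_target ** 2) + eps) /
                         (np.sum(e_noise ** 2) + eps))
```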