A Multi-channel Speech Separation System for Unknown Number of Multiple Speakers

Chao Peng, Yiwen Wang, Xihong Wu, T. Qu
{"title":"未知多说话人的多通道语音分离系统","authors":"Chao Peng, Yiwen Wang, Xihong Wu, T. Qu","doi":"10.1109/ICICSP55539.2022.10050619","DOIUrl":null,"url":null,"abstract":"This paper presents a multi-channel speech separation system for an unknown number of speakers. It can be applied to cases with a different number of speakers using a single model by iterative speech separation based on beam signal. It first determines the spatial directions where speakers are located (Direction of Arrival, DOA), and then the beam signals in each direction are obtained with spectral features, spatial features, and directional features by deep neural networks. Finally, the iterative speech separation is performed on the basis of the beam signals. Experimental evaluations show that the proposed method is better than the multi-channel Permutation Invariant Training (PIT) and Deep Clustering (DPCL) for an unknown number of speakers and the one-and-rest speech separation method. Besides, the system can still keep a relatively good separation performance even though the number of speakers is enlarged to 9.","PeriodicalId":281095,"journal":{"name":"2022 5th International Conference on Information Communication and Signal Processing (ICICSP)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Multi-channel Speech Separation System for Unknown Number of Multiple Speakers\",\"authors\":\"Chao Peng, Yiwen Wang, Xihong Wu, T. Qu\",\"doi\":\"10.1109/ICICSP55539.2022.10050619\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents a multi-channel speech separation system for an unknown number of speakers. It can be applied to cases with a different number of speakers using a single model by iterative speech separation based on beam signal. It first determines the spatial directions where speakers are located (Direction of Arrival, DOA), and then the beam signals in each direction are obtained with spectral features, spatial features, and directional features by deep neural networks. Finally, the iterative speech separation is performed on the basis of the beam signals. Experimental evaluations show that the proposed method is better than the multi-channel Permutation Invariant Training (PIT) and Deep Clustering (DPCL) for an unknown number of speakers and the one-and-rest speech separation method. 
Besides, the system can still keep a relatively good separation performance even though the number of speakers is enlarged to 9.\",\"PeriodicalId\":281095,\"journal\":{\"name\":\"2022 5th International Conference on Information Communication and Signal Processing (ICICSP)\",\"volume\":\"60 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 5th International Conference on Information Communication and Signal Processing (ICICSP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICICSP55539.2022.10050619\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 5th International Conference on Information Communication and Signal Processing (ICICSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICICSP55539.2022.10050619","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

This paper presents a multi-channel speech separation system for an unknown number of speakers. Through iterative speech separation based on beam signals, a single model can handle mixtures with different numbers of speakers. The system first estimates the spatial directions in which the speakers are located (direction of arrival, DOA); the beam signal in each direction is then obtained by deep neural networks from spectral, spatial, and directional features. Finally, iterative speech separation is performed on the basis of these beam signals. Experimental evaluations show that, for an unknown number of speakers, the proposed method outperforms multi-channel Permutation Invariant Training (PIT), Deep Clustering (DPCL), and the one-and-rest speech separation method. Moreover, the system maintains relatively good separation performance even when the number of speakers is increased to nine.
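The pipeline described in the abstract can be pictured with a short sketch. The fragment below is a minimal illustration, not the authors' implementation: it assumes a uniform linear microphone array, replaces the paper's DNN-based beam-signal estimation with a plain delay-and-sum beamformer in the STFT domain, and uses a simple energy threshold as the stopping rule of the iterative loop. All constants and helper names (FS, MIC_SPACING, iterative_separation, and so on) are hypothetical.

```python
# Minimal sketch of beam-based iterative separation for an unknown number of
# speakers. NOT the authors' implementation: the paper estimates beam signals
# with deep neural networks from spectral, spatial, and directional features;
# here a plain delay-and-sum beamformer and an energy-based stopping rule
# stand in for those components, purely to illustrate the loop structure.
# All constants and helper names below are assumptions.

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
FS = 16000               # sampling rate in Hz (assumed)
N_FFT = 512
HOP = N_FFT // 2
MIC_SPACING = 0.04       # metres, assumed uniform linear array


def stft(x):
    """Naive STFT of a multi-channel signal x with shape (mics, samples)."""
    win = np.hanning(N_FFT)
    frames = [np.fft.rfft(x[:, s:s + N_FFT] * win, axis=1)
              for s in range(0, x.shape[1] - N_FFT, HOP)]
    return np.stack(frames, axis=1)          # (mics, frames, freq)


def steering_vector(doa_deg, n_mics, n_freq):
    """Far-field steering vectors of a uniform linear array for one DOA."""
    freqs = np.arange(n_freq) * FS / N_FFT
    delays = (np.arange(n_mics) * MIC_SPACING
              * np.cos(np.deg2rad(doa_deg)) / SPEED_OF_SOUND)
    return np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])  # (mics, freq)


def beam_signal(spec, doa_deg):
    """Delay-and-sum beam toward doa_deg; spec has shape (mics, frames, freq)."""
    sv = steering_vector(doa_deg, spec.shape[0], spec.shape[2])
    return np.mean(np.conj(sv)[:, None, :] * spec, axis=0)   # (frames, freq)


def iterative_separation(mix, candidate_doas, energy_floor=1e-3):
    """Extract one source per pass until no candidate beam carries much energy."""
    residual = stft(mix)
    separated = []
    for _ in candidate_doas:
        beams = [beam_signal(residual, d) for d in candidate_doas]
        energies = [np.mean(np.abs(b) ** 2) for b in beams]
        best = int(np.argmax(energies))
        if energies[best] < energy_floor:     # stop: nothing strong is left
            break
        target = beams[best]
        separated.append((candidate_doas[best], target))
        # Crude removal of the extracted beam from the residual mixture; the
        # paper instead re-applies its DNN-based separation to what remains.
        sv = steering_vector(candidate_doas[best],
                             residual.shape[0], residual.shape[2])
        residual = residual - sv[:, None, :] * target[None, :, :]
    return separated


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mix = rng.standard_normal((4, FS))        # 4-mic, one-second toy "mixture"
    outputs = iterative_separation(mix, candidate_doas=[30.0, 90.0, 150.0])
    print(f"extracted {len(outputs)} beam signals")
```

The part this sketch is meant to convey is the loop structure: one speaker is extracted per pass along the strongest remaining direction, and the process stops when no direction carries significant energy, which is what lets a single model cope with an unknown number of speakers.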