Improving separation of overlapped speech for meeting conversations using uncalibrated microphone array

Keisuke Nakamura, R. Gomez
{"title":"Improving separation of overlapped speech for meeting conversations using uncalibrated microphone array","authors":"Keisuke Nakamura, R. Gomez","doi":"10.1109/ASRU.2017.8268916","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a novel approach of sound source separation for meeting conversations even when using an uncalibrated microphone array. Our method can blindly estimate three parameters for separation, namely Steering Vectors (SVs), speaker indices, and activity periods of each speaker. First, we estimate the number of speakers and SVs by clustering Time Delay Of Arrival (TDOA) of the observed signal and selecting major clusters to compute TDOA-based SVs. Then, speaker indices and activity periods are estimated by thresholding spatial spectrum using estimated SVs, whose threshold is blindly obtained. Finally, we separate overlapped speeches/noise based on dynamic design of noise correlation matrices of the minimum variance distortionless response (MVDR) beamformer using blindly estimated parameters. The proposed algorithm was evaluated in both separation objective measure and recognition correct rate and showed improvements in both single and simultaneous speech scenarios in a reverberant meeting room. 
Moreover, the blindly estimated parameters improved separation and recognition compared to geometrically obtained parameters.","PeriodicalId":290868,"journal":{"name":"2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASRU.2017.8268916","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

In this paper, we propose a novel approach to sound source separation for meeting conversations, even when using an uncalibrated microphone array. Our method can blindly estimate three sets of parameters for separation, namely the Steering Vectors (SVs), the speaker indices, and the activity periods of each speaker. First, we estimate the number of speakers and the SVs by clustering the Time Delay Of Arrival (TDOA) of the observed signal and selecting the major clusters to compute TDOA-based SVs. Then, the speaker indices and activity periods are estimated by thresholding the spatial spectrum computed with the estimated SVs, where the threshold itself is obtained blindly. Finally, we separate overlapped speech and noise through a dynamic design of the noise correlation matrices of the minimum variance distortionless response (MVDR) beamformer, using the blindly estimated parameters. The proposed algorithm was evaluated on both an objective separation measure and recognition correct rate, and showed improvements in both single-speaker and simultaneous-speech scenarios in a reverberant meeting room. Moreover, the blindly estimated parameters improved separation and recognition compared to geometrically obtained parameters.
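The two core building blocks named in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the per-frequency formulation, the function names, and the assumption of a Hermitian positive-definite noise correlation matrix are ours. The steering vector is built from per-microphone TDOAs (the paper's first step), and the MVDR weights follow the standard form w = R⁻¹d / (dᴴR⁻¹d), which guarantees a distortionless response (wᴴd = 1) toward the steered direction.

```python
import numpy as np

def tdoa_steering_vector(taus, freq):
    """TDOA-based steering vector at a single frequency (Hz).

    taus: per-microphone time delays in seconds, relative to a reference mic.
    Returns a complex vector d with |d_m| = 1 for each microphone m.
    """
    return np.exp(-2j * np.pi * freq * np.asarray(taus, dtype=float))

def mvdr_weights(noise_corr, sv):
    """MVDR beamformer weights: w = R^{-1} d / (d^H R^{-1} d).

    noise_corr: Hermitian positive-definite noise correlation matrix R.
    sv: steering vector d toward the desired speaker.
    """
    r_inv_d = np.linalg.solve(noise_corr, sv)   # R^{-1} d without explicit inversion
    return r_inv_d / (sv.conj() @ r_inv_d)      # normalize so that w^H d = 1

# Example: 3-mic array, steering toward a source with small inter-mic delays.
sv = tdoa_steering_vector([0.0, 1e-4, 2e-4], freq=1000.0)
w = mvdr_weights(np.eye(3, dtype=complex), sv)
# Distortionless constraint holds: w^H d = 1 (up to floating-point error).
```

In the paper's setting, the noise correlation matrix is redesigned dynamically from the blindly estimated speaker activity periods, so that interfering speakers active in a given frame are treated as noise by the beamformer.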