Speaker Distance Estimation in Enclosures from Single-Channel Audio

Michael Neri, Archontis Politis, Daniel Krause, Marco Carli, Tuomas Virtanen
{"title":"通过单通道音频估计箱体内扬声器的距离","authors":"Michael Neri, Archontis Politis, Daniel Krause, Marco Carli, Tuomas Virtanen","doi":"arxiv-2403.17514","DOIUrl":null,"url":null,"abstract":"Distance estimation from audio plays a crucial role in various applications,\nsuch as acoustic scene analysis, sound source localization, and room modeling.\nMost studies predominantly center on employing a classification approach, where\ndistances are discretized into distinct categories, enabling smoother model\ntraining and achieving higher accuracy but imposing restrictions on the\nprecision of the obtained sound source position. Towards this direction, in\nthis paper we propose a novel approach for continuous distance estimation from\naudio signals using a convolutional recurrent neural network with an attention\nmodule. The attention mechanism enables the model to focus on relevant temporal\nand spectral features, enhancing its ability to capture fine-grained\ndistance-related information. To evaluate the effectiveness of our proposed\nmethod, we conduct extensive experiments using audio recordings in controlled\nenvironments with three levels of realism (synthetic room impulse response,\nmeasured response with convolved speech, and real recordings) on four datasets\n(our synthetic dataset, QMULTIMIT, VoiceHome-2, and STARSS23). Experimental\nresults show that the model achieves an absolute error of 0.11 meters in a\nnoiseless synthetic scenario. Moreover, the results showed an absolute error of\nabout 1.30 meters in the hybrid scenario. The algorithm's performance in the\nreal scenario, where unpredictable environmental factors and noise are\nprevalent, yields an absolute error of approximately 0.50 meters. For\nreproducible research purposes we make model, code, and synthetic datasets\navailable at https://github.com/michaelneri/audio-distance-estimation.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":"19 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Speaker Distance Estimation in Enclosures from Single-Channel Audio\",\"authors\":\"Michael Neri, Archontis Politis, Daniel Krause, Marco Carli, Tuomas Virtanen\",\"doi\":\"arxiv-2403.17514\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Distance estimation from audio plays a crucial role in various applications,\\nsuch as acoustic scene analysis, sound source localization, and room modeling.\\nMost studies predominantly center on employing a classification approach, where\\ndistances are discretized into distinct categories, enabling smoother model\\ntraining and achieving higher accuracy but imposing restrictions on the\\nprecision of the obtained sound source position. Towards this direction, in\\nthis paper we propose a novel approach for continuous distance estimation from\\naudio signals using a convolutional recurrent neural network with an attention\\nmodule. The attention mechanism enables the model to focus on relevant temporal\\nand spectral features, enhancing its ability to capture fine-grained\\ndistance-related information. 
To evaluate the effectiveness of our proposed\\nmethod, we conduct extensive experiments using audio recordings in controlled\\nenvironments with three levels of realism (synthetic room impulse response,\\nmeasured response with convolved speech, and real recordings) on four datasets\\n(our synthetic dataset, QMULTIMIT, VoiceHome-2, and STARSS23). Experimental\\nresults show that the model achieves an absolute error of 0.11 meters in a\\nnoiseless synthetic scenario. Moreover, the results showed an absolute error of\\nabout 1.30 meters in the hybrid scenario. The algorithm's performance in the\\nreal scenario, where unpredictable environmental factors and noise are\\nprevalent, yields an absolute error of approximately 0.50 meters. For\\nreproducible research purposes we make model, code, and synthetic datasets\\navailable at https://github.com/michaelneri/audio-distance-estimation.\",\"PeriodicalId\":501178,\"journal\":{\"name\":\"arXiv - CS - Sound\",\"volume\":\"19 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Sound\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2403.17514\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Sound","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2403.17514","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Distance estimation from audio plays a crucial role in various applications, such as acoustic scene analysis, sound source localization, and room modeling. Most studies predominantly center on employing a classification approach, where distances are discretized into distinct categories, enabling smoother model training and achieving higher accuracy but imposing restrictions on the precision of the obtained sound source position. Towards this direction, in this paper we propose a novel approach for continuous distance estimation from audio signals using a convolutional recurrent neural network with an attention module. The attention mechanism enables the model to focus on relevant temporal and spectral features, enhancing its ability to capture fine-grained distance-related information. To evaluate the effectiveness of our proposed method, we conduct extensive experiments using audio recordings in controlled environments with three levels of realism (synthetic room impulse response, measured response with convolved speech, and real recordings) on four datasets (our synthetic dataset, QMULTIMIT, VoiceHome-2, and STARSS23). Experimental results show that the model achieves an absolute error of 0.11 meters in a noiseless synthetic scenario. Moreover, the results showed an absolute error of about 1.30 meters in the hybrid scenario. The algorithm's performance in the real scenario, where unpredictable environmental factors and noise are prevalent, yields an absolute error of approximately 0.50 meters. For reproducible research purposes we make model, code, and synthetic datasets available at https://github.com/michaelneri/audio-distance-estimation.
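The abstract describes the model only at a high level: a convolutional front end, a recurrent layer over time frames, and an attention module that pools frames before a continuous-distance regression head. The PyTorch sketch below illustrates that general architecture. It is a minimal sketch under stated assumptions, not the authors' implementation: the `CRNNDistanceRegressor` name, all layer sizes, and the mel-spectrogram input shape are invented for illustration; the actual model and training code are in the linked repository.

```python
# Minimal sketch (NOT the authors' exact model) of a convolutional recurrent
# network with attention pooling that regresses a continuous source distance
# from a single-channel spectrogram. Layer sizes are illustrative assumptions;
# see https://github.com/michaelneri/audio-distance-estimation for the real code.
import torch
import torch.nn as nn

class CRNNDistanceRegressor(nn.Module):
    def __init__(self, n_mels=64, hidden=128):
        super().__init__()
        # Convolutional front end: extract local time-frequency features.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                        # pool frequency only
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        conv_feats = 64 * (n_mels // 4)
        # Recurrent layer: model the temporal evolution of reverberant cues.
        self.gru = nn.GRU(conv_feats, hidden, batch_first=True, bidirectional=True)
        # Attention pooling: weight time frames by their estimated relevance.
        self.attn = nn.Linear(2 * hidden, 1)
        # Regression head: a single continuous distance in meters.
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, spec):                    # spec: (batch, 1, n_mels, frames)
        h = self.conv(spec)                     # (batch, 64, n_mels//4, frames)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)   # (batch, frames, feats)
        h, _ = self.gru(h)                      # (batch, frames, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # (batch, frames, 1) frame weights
        pooled = (w * h).sum(dim=1)             # attention-weighted summary
        return self.head(pooled).squeeze(-1)    # (batch,) distances in meters

model = CRNNDistanceRegressor()
dummy = torch.randn(2, 1, 64, 100)              # two 100-frame mel spectrograms
print(model(dummy).shape)                       # torch.Size([2])
```

Training such a regressor with an L1 (mean absolute error) objective would align with the absolute-error metric reported in the abstract, though the paper's actual loss and training setup should be taken from the repository.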