
Eurasip Journal on Audio Speech and Music Processing: Latest Publications

Continuous lipreading based on acoustic temporal alignments
IF 2.4 | Computer Science (CAS Tier 3) | Q2 ACOUSTICS | Pub Date: 2024-05-06 | DOI: 10.1186/s13636-024-00345-7
David Gimeno-Gómez, Carlos-D. Martínez-Hinarejos
Visual speech recognition (VSR) is a challenging task that has received increasing interest during the last few decades. Current state of the art employs powerful end-to-end architectures based on deep learning which depend on large amounts of data and high computational resources for their estimation. We address the task of VSR for data scarcity scenarios with limited computational resources by using traditional approaches based on hidden Markov models. We present a novel learning strategy that employs information obtained from previous acoustic temporal alignments to improve the visual system performance. Furthermore, we studied multiple visual speech representations and how image resolution or frame rate affect its performance. All these experiments were conducted on the limited data VLRF corpus, a database which offers an audio-visual support to address continuous speech recognition in Spanish. The results show that our approach significantly outperforms the best results achieved on the task to date.
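As a rough illustration of the core idea (not the authors' implementation), the sketch below maps phone-level time alignments produced by a previously trained acoustic HMM onto video frames so they can serve as frame-level targets for the visual models; the alignment format, phone labels, and frame rates are assumptions made for the example.

```python
import numpy as np

def alignments_to_video_targets(alignments, video_fps=25.0, num_frames=None):
    """Map (phone, start_s, end_s) acoustic alignments onto video frames."""
    if num_frames is None:
        num_frames = int(np.ceil(alignments[-1][2] * video_fps))
    targets = ["sil"] * num_frames                          # default: silence
    for phone, start_s, end_s in alignments:
        first = int(np.floor(start_s * video_fps))
        last = min(int(np.ceil(end_s * video_fps)), num_frames)
        for frame in range(first, last):
            targets[frame] = phone
    return targets

# Hypothetical alignments from the acoustic pass supervise the visual front end.
ali = [("s", 0.00, 0.12), ("o", 0.12, 0.30), ("l", 0.30, 0.41)]
print(alignments_to_video_targets(ali, video_fps=25.0))
```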
Citations: 0
Mi-Go: tool which uses YouTube as data source for evaluating general-purpose speech recognition machine learning models
IF 2.4 | Computer Science (CAS Tier 3) | Q2 ACOUSTICS | Pub Date: 2024-05-01 | DOI: 10.1186/s13636-024-00343-9
Tomasz Wojnar, Jarosław Hryszko, Adam Roman
This article introduces Mi-Go, a tool aimed at evaluating the performance and adaptability of general-purpose speech recognition machine learning models across diverse real-world scenarios. The tool leverages YouTube as a rich and continuously updated data source, accounting for multiple languages, accents, dialects, speaking styles, and audio quality levels. To demonstrate the effectiveness of the tool, an experiment was conducted, by using Mi-Go to evaluate state-of-the-art automatic speech recognition machine learning models. The evaluation involved a total of 141 randomly selected YouTube videos. The results underscore the utility of YouTube as a valuable data source for evaluation of speech recognition models, ensuring their robustness, accuracy, and adaptability to diverse languages and acoustic conditions. Additionally, by contrasting the machine-generated transcriptions against human-made subtitles, the Mi-Go tool can help pinpoint potential misuse of YouTube subtitles, like search engine optimization.
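The following Python sketch outlines the kind of evaluation loop Mi-Go automates; it is not the tool's own code, and it assumes yt-dlp, openai-whisper, and jiwer as stand-in components for downloading, transcription, and word-error-rate scoring.

```python
import subprocess
import jiwer
import whisper  # openai-whisper, used here only as an example ASR model

def evaluate_video(url: str, human_subtitles: str) -> float:
    # Extract the audio track with yt-dlp (command-line invocation).
    subprocess.run(["yt-dlp", "-x", "--audio-format", "wav",
                    "-o", "clip.%(ext)s", url], check=True)
    # Transcribe the downloaded audio with the ASR model under evaluation.
    hypothesis = whisper.load_model("base").transcribe("clip.wav")["text"]
    # Score the machine transcription against the human-made subtitles.
    return jiwer.wer(human_subtitles, hypothesis)

# wer = evaluate_video("https://www.youtube.com/watch?v=...", reference_text)
```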
Citations: 0
Exploring the power of pure attention mechanisms in blind room parameter estimation
IF 2.4 | Computer Science (CAS Tier 3) | Q2 ACOUSTICS | Pub Date: 2024-04-24 | DOI: 10.1186/s13636-024-00344-8
Chunxi Wang, Maoshen Jia, Meiran Li, Changchun Bao, Wenyu Jin
Dynamic parameterization of acoustic environments has drawn widespread attention in the field of audio processing. Precise representation of local room acoustic characteristics is crucial when designing audio filters for various audio rendering applications. Key parameters in this context include reverberation time ($RT_{60}$) and geometric room volume. In recent years, neural networks have been extensively applied in the task of blind room parameter estimation. However, there remains a question of whether pure attention mechanisms can achieve superior performance in this task. To address this issue, this study employs blind room parameter estimation based on monaural noisy speech signals. Various model architectures are investigated, including a proposed attention-based model. This model is a convolution-free Audio Spectrogram Transformer, utilizing patch splitting, attention mechanisms, and cross-modality transfer learning from a pretrained Vision Transformer. Experimental results suggest that the proposed attention mechanism-based model, relying purely on attention mechanisms without using convolution, exhibits significantly improved performance across various room parameter estimation tasks, especially with the help of dedicated pretraining and data augmentation schemes. Additionally, the model demonstrates more advantageous adaptability and robustness when handling variable-length audio inputs compared to existing methods.
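A minimal, convolution-free sketch of this kind of model is shown below; the patch size, embedding dimension, and regression head are illustrative assumptions rather than the paper's actual architecture (positional encoding and pretraining are omitted for brevity).

```python
import torch
import torch.nn as nn

class TinySpectrogramTransformer(nn.Module):
    """Patch-split a log-spectrogram and regress [RT60, volume] with pure attention."""
    def __init__(self, patch=(16, 16), dim=128, layers=4, heads=4):
        super().__init__()
        self.patch = patch
        self.embed = nn.Linear(patch[0] * patch[1], dim)        # linear patch embedding
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.head = nn.Linear(dim, 2)                           # [RT60, room volume]

    def forward(self, spec):                                    # spec: (batch, freq, time)
        pf, pt = self.patch
        patches = spec.unfold(1, pf, pf).unfold(2, pt, pt)      # non-overlapping patches
        tokens = patches.reshape(spec.size(0), -1, pf * pt)
        hidden = self.encoder(self.embed(tokens))               # positional encoding omitted
        return self.head(hidden.mean(dim=1))                    # mean-pooled regression

model = TinySpectrogramTransformer()
print(model(torch.randn(2, 64, 256)).shape)                     # torch.Size([2, 2])
```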
Citations: 0
Robust acoustic reflector localization using a modified EM algorithm
IF 2.4 | Computer Science (CAS Tier 3) | Q2 ACOUSTICS | Pub Date: 2024-04-18 | DOI: 10.1186/s13636-024-00340-y
Usama Saqib, Mads Græsbøll Christensen, Jesper Rindom Jensen
In robotics, echolocation has been used to detect acoustic reflectors, e.g., walls, as it aids the robotic platform to navigate in darkness and also helps detect transparent surfaces. However, the transfer function or response of an acoustic system, e.g., loudspeakers/emitters, contributes to non-ideal behavior within the acoustic systems that can contribute to a phase lag due to propagation delay. This non-ideal response can hinder the performance of a time-of-arrival (TOA) estimator intended for acoustic reflector localization especially when the estimation of multiple reflections is required. In this paper, we, therefore, propose a robust expectation-maximization (EM) algorithm that takes into account the response of acoustic systems to enhance the TOA estimation accuracy when estimating multiple reflections when the robot is placed in a corner of a room. A non-ideal transfer function is built with two parameters, which are estimated recursively within the estimator. To test the proposed method, a hardware proof-of-concept setup was built with two different designs. The experimental results show that the proposed method could detect an acoustic reflector up to a distance of 1.6 m with 60% accuracy under the signal-to-noise ratio (SNR) of 0 dB. Compared to the state-of-the-art EM algorithm, our proposed method provides improved performance when estimating TOA by 10% under a low SNR value.
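As a simplified illustration of why the emitter response matters for TOA estimation, the snippet below uses plain matched filtering (not the paper's EM algorithm) with a template that includes a toy non-ideal system response; all signals and values are synthetic.

```python
import numpy as np

fs = 48_000
probe = np.random.randn(1024)                        # emitted probe signal
emitter = np.exp(-np.arange(64) / 8.0)               # toy non-ideal emitter response
template = np.convolve(probe, emitter)               # what actually leaves the loudspeaker

true_delay = 230                                     # samples, roughly 1.6 m of travel at 343 m/s
received = np.zeros(4096)
received[true_delay:true_delay + len(template)] += 0.6 * template
received += 0.05 * np.random.randn(len(received))    # measurement noise

# Matched filter with the full (probe * emitter) template, so the emitter's
# phase lag is part of the model rather than a bias on the estimate.
corr = np.correlate(received, template, mode="valid")
print("estimated TOA (samples):", int(np.argmax(corr)))   # close to true_delay
```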
Citations: 0
Supervised Attention Multi-Scale Temporal Convolutional Network for monaural speech enhancement
IF 2.4 | Computer Science (CAS Tier 3) | Q2 ACOUSTICS | Pub Date: 2024-04-11 | DOI: 10.1186/s13636-024-00341-x
Zehua Zhang, Lu Zhang, Xuyi Zhuang, Yukun Qian, Mingjiang Wang
Speech signals are often distorted by reverberation and noise, with a widely distributed signal-to-noise ratio (SNR). To address this, our study develops robust, deep neural network (DNN)-based speech enhancement methods. We reproduce several DNN-based monaural speech enhancement methods and outline a strategy for constructing datasets. This strategy, validated through experimental reproductions, has effectively enhanced the denoising efficiency and robustness of the models. Then, we propose a causal speech enhancement system named Supervised Attention Multi-Scale Temporal Convolutional Network (SA-MSTCN). SA-MSTCN extracts the complex compressed spectrum (CCS) for input encoding and employs complex ratio masking (CRM) for output decoding. The supervised attention module, a lightweight addition to SA-MSTCN, guides feature extraction. Experiment results show that the supervised attention module effectively improves noise reduction performance with a minor increase in computational cost. The multi-scale temporal convolutional network refines the perceptual field and better reconstructs the speech signal. Overall, SA-MSTCN not only achieves state-of-the-art speech quality and intelligibility compared to other methods but also maintains stable denoising performance across various environments.
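A small sketch of the input/output representations mentioned above, i.e., the power-law compressed complex spectrum (CCS) and the complex ratio mask (CRM), is given below; the compression exponent, the example audio, and the identity mask are placeholders, not values from the paper.

```python
import numpy as np
import librosa

def compress(stft, power=0.3):
    """Power-law compress the magnitude while keeping the phase (CCS)."""
    return (np.abs(stft) ** power) * np.exp(1j * np.angle(stft))

def apply_crm(noisy_stft, mask_real, mask_imag):
    """Complex ratio mask: element-wise complex multiplication in the STFT domain."""
    return (mask_real + 1j * mask_imag) * noisy_stft

y, sr = librosa.load(librosa.example("trumpet"), sr=16_000)
noisy = librosa.stft(y, n_fft=512, hop_length=128)
ccs = compress(noisy)                                  # network input (real/imag stacked in practice)
mask_r, mask_i = np.ones_like(noisy.real), np.zeros_like(noisy.real)  # identity mask placeholder
enhanced = librosa.istft(apply_crm(noisy, mask_r, mask_i), hop_length=128)
```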
Citations: 0
Correction: DeepDet: YAMNet with BottleNeck Attention Module (BAM) for TTS synthesis detection
IF 2.4 | Computer Science (CAS Tier 3) | Q2 ACOUSTICS | Pub Date: 2024-04-11 | DOI: 10.1186/s13636-024-00342-w
Rabbia Mahum, Aun Irtaza, Ali Javed, Haitham A. Mahmoud, Haseeb Hassan
Correction: EURASIP J. Audio Speech Music Process 2024, 18 (2024). https://doi.org/10.1186/s13636-024-00335-9

Following publication of the original article [1], we have been notified that:

- Equation 9 was missing from the paper; therefore, all equations have been renumbered.
- The title should be modified from "DeepDet: YAMNet with BottleNeck Attention Module (BAM) TTS synthesis detection" to "DeepDet: YAMNet with BottleNeck Attention Module (BAM) for TTS synthesis detection".
- The Acknowledgements section needs to include the following statement: The authors extend their appreciation to King Saud University for funding this work through Researchers Supporting Project number (RSPD2024R1006), King Saud University, Riyadh, Saudi Arabia.
- The following text in the Funding section has been removed: The authors extend their appreciation to the Deputyship for Research and Innovation, "Ministry of Education" in Saudi Arabia for funding this research (IFKSUOR3-561-2).

The original article has been corrected.

1. Mahum et al., DeepDet: YAMNet with BottleNeck Attention Module (BAM) for TTS synthesis detection. EURASIP J. Audio Speech Music Process. 2024, 18 (2024). https://doi.org/10.1186/s13636-024-00335-9
{"title":"Correction: DeepDet: YAMNet with BottleNeck Attention Module (BAM) for TTS synthesis detection","authors":"Rabbia Mahum, Aun Irtaza, Ali Javed, Haitham A. Mahmoud, Haseeb Hassan","doi":"10.1186/s13636-024-00342-w","DOIUrl":"https://doi.org/10.1186/s13636-024-00342-w","url":null,"abstract":"&lt;p&gt;&lt;b&gt;Correction&lt;/b&gt;&lt;b&gt;:&lt;/b&gt; &lt;b&gt;EURASIP J. Audio Speech Music Process 2024, 18 (2024)&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;https://doi.org/10.1186/s13636-024-00335-9&lt;/b&gt;&lt;/p&gt;&lt;br/&gt;&lt;p&gt;Following publication of the original article [1], we have been notified that:&lt;/p&gt;&lt;p&gt;-Equation 9 was missing from the paper, therefore all equations have been renumbered.&lt;/p&gt;&lt;p&gt;-The title should be modified from “DeepDet: YAMNet with BottleNeck Attention Module (BAM) TTS synthesis detection” to “DeepDet: YAMNet with BottleNeck Attention Module (BAM) for TTS synthesis detection”.&lt;/p&gt;&lt;p&gt;-The Acknowledgements section needs to include the following statement:&lt;/p&gt;&lt;p&gt;The authors extend their appreciation to King Saud University for funding this work through Researchers Supporting Project number (RSPD2024R1006), King Saud University, Riyadh, Saudi Arabia.&lt;/p&gt;&lt;p&gt;-The below text in the Funding section has been removed:&lt;/p&gt;&lt;p&gt;The authors extend their appreciation to the Deputyship for Research and Innovation, “Ministry of Education” in Saudi Arabia for funding this research (IFKSUOR3–561–2).&lt;/p&gt;&lt;p&gt;The original article has been corrected.&lt;/p&gt;&lt;ol data-track-component=\"outbound reference\"&gt;&lt;li data-counter=\"1.\"&gt;&lt;p&gt;Mahum et al., DeepDet: YAMNet with BottleNeck Attention Module (BAM) for TTS synthesis detection. EURASIP J. Audio Speech Music Process. &lt;b&gt;2024&lt;/b&gt;, 18 (2024). https://doi.org/10.1186/s13636-024-00335-9&lt;/p&gt;&lt;p&gt;Article Google Scholar &lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;Download references&lt;svg aria-hidden=\"true\" focusable=\"false\" height=\"16\" role=\"img\" width=\"16\"&gt;&lt;use xlink:href=\"#icon-eds-i-download-medium\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"&gt;&lt;/use&gt;&lt;/svg&gt;&lt;/p&gt;&lt;h3&gt;Authors and Affiliations&lt;/h3&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Computer Science Department, UET Taxila, Taxila, Pakistan&lt;/p&gt;&lt;p&gt;Rabbia Mahum &amp; Aun Irtaza&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Software Engineering Department, UET Taxila, Taxila, Pakistan&lt;/p&gt;&lt;p&gt;Ali Javed&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Industrial Engineering Department, College of Engineering, King Saud University, 11421, Riyadh, Saudi Arabia&lt;/p&gt;&lt;p&gt;Haitham A. 
Mahmoud&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;College of Big Data and Internet, Shenzhen Technology University (SZTU), Shenzhen, China&lt;/p&gt;&lt;p&gt;Haseeb Hassan&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;span&gt;Authors&lt;/span&gt;&lt;ol&gt;&lt;li&gt;&lt;span&gt;Rabbia Mahum&lt;/span&gt;View author publications&lt;p&gt;You can also search for this author in &lt;span&gt;PubMed&lt;span&gt; &lt;/span&gt;Google Scholar&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;span&gt;Aun Irtaza&lt;/span&gt;View author publications&lt;p&gt;You can also search for this author in &lt;span&gt;PubMed&lt;span&gt; &lt;/span&gt;Google Scholar&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;span&gt;Ali Javed&lt;/span&gt;View author publications&lt;p&gt;You can also search for this author in &lt;span&gt;PubMed&lt;span&gt; &lt;/span&gt;Google Scholar&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;span&gt;Haitham A. Mahmoud&lt;/span&gt;View author publications&lt;p&gt;You can also search for this author in &lt;span&gt;PubMed&lt;span&gt; &lt;/span&gt;Google Scholar&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;span&gt;Haseeb Hassan&lt;/span&gt;View author publications&lt;p&gt;You can also search for this author in &lt;span&gt;PubMed&lt;span&gt; &lt;/span&gt;Google Scholar&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3&gt;Corresponding author&lt;/h3&gt;&lt;p&gt;Correspondence to Rabbia Mahum.&lt;/p&gt;&lt;p&gt;&lt;b&gt;O","PeriodicalId":49202,"journal":{"name":"Eurasip Journal on Audio Speech and Music Processing","volume":"271 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140596322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DeepDet: YAMNet with BottleNeck Attention Module (BAM) for TTS synthesis detection
IF 2.4 | Computer Science (CAS Tier 3) | Q2 ACOUSTICS | Pub Date: 2024-04-01 | DOI: 10.1186/s13636-024-00335-9
Rabbia Mahum, Aun Irtaza, Ali Javed, Haitham A. Mahmoud, Haseeb Hassan
Spoofed speeches are becoming a big threat to society due to advancements in artificial intelligence techniques. Therefore, there must be an automated spoofing detector that can be integrated into automatic speaker verification (ASV) systems. In this study, we recommend a novel and robust model, named DeepDet, based on deep-layered architecture, to categorize speech into two classes: spoofed and bonafide. DeepDet is an improved model based on Yet Another Mobile Network (YAMNet) employing a customized MobileNet combined with a bottleneck attention module (BAM). First, we convert audio into mel-spectrograms that consist of time–frequency representations on mel-scale. Second, we trained our deep layered model using the extracted mel-spectrograms on a Logical Access (LA) set, including synthesized speeches and voice conversions of the ASVspoof-2019 dataset. In the end, we classified the audios, utilizing our trained binary classifier. More precisely, we utilized the power of layered architecture and guided attention that can discern the spoofed speech from bonafide samples. Our proposed improved model employs depth-wise linearly separate convolutions, which makes our model lighter weight than existing techniques. Furthermore, we implemented extensive experiments to assess the performance of the suggested model using the ASVspoof 2019 corpus. We attained an equal error rate (EER) of 0.042% on Logical Access (LA), whereas 0.43% on Physical Access (PA) attacks. Therefore, the performance of the proposed model is significant on the ASVspoof 2019 dataset and indicates the effectiveness of the DeepDet over existing spoofing detectors. Additionally, our proposed model is robust enough that can identify the unseen spoofed audios and classifies the several attacks accurately.
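A minimal PyTorch sketch of a bottleneck-attention-style block feeding a two-class (bonafide vs. spoofed) head is shown below; it is illustrative only and does not reproduce the exact YAMNet backbone or the DeepDet configuration.

```python
import torch
import torch.nn as nn

class BAMStyleBlock(nn.Module):
    """Channel and spatial attention combined into a single multiplicative gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1), nn.ReLU(),
            nn.Conv2d(channels // reduction, 1, kernel_size=3, padding=4, dilation=4))

    def forward(self, x):                                  # x: (batch, channels, H, W)
        c = self.channel(x).unsqueeze(-1).unsqueeze(-1)    # (batch, channels, 1, 1)
        s = self.spatial(x)                                # (batch, 1, H, W)
        return x * (1 + torch.sigmoid(c + s))              # attention-refined features

features = torch.randn(4, 32, 96, 64)                      # e.g., feature maps of mel-spectrograms
logits = nn.Linear(32, 2)(BAMStyleBlock(32)(features).mean(dim=(2, 3)))
print(logits.shape)                                        # torch.Size([4, 2]) -> bonafide / spoofed
```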
Citations: 0
Multi-rate modulation encoding via unsupervised learning for audio event detection
IF 2.4 | Computer Science (CAS Tier 3) | Q2 ACOUSTICS | Pub Date: 2024-04-01 | DOI: 10.1186/s13636-024-00339-5
Sandeep Reddy Kothinti, Mounya Elhilali
Technologies in healthcare, smart homes, security, ecology, and entertainment all deploy audio event detection (AED) in order to detect sound events in an audio recording. Effective AED techniques rely heavily on supervised or semi-supervised models to capture the wide range of dynamics spanned by sound events in order to achieve temporally precise boundaries and accurate event classification. These methods require extensive collections of labeled or weakly labeled in-domain data, which is costly and labor-intensive. Importantly, these approaches do not fully leverage the inherent variability and range of dynamics across sound events, aspects that can be effectively identified through unsupervised methods. The present work proposes an approach based on multi-rate autoencoders that are pretrained in an unsupervised way to leverage unlabeled audio data and ultimately learn the rich temporal dynamics inherent in natural sound events. This approach utilizes parallel autoencoders that achieve decompositions of the modulation spectrum along different bands. In addition, we introduce a rate-selective temporal contrastive loss to align the training objective with event detection metrics. Optimizing the configuration of multi-rate encoders and the temporal contrastive loss leads to notable improvements in domestic sound event detection in the context of the DCASE challenge.
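A compact sketch of the multi-rate idea is given below, with parallel autoencoder branches that see the same feature sequence pooled at different temporal rates; the branch design and pooling factors are assumptions made for illustration, not the paper's architecture, and the rate-selective contrastive loss is omitted.

```python
import torch
import torch.nn as nn

class RateBranch(nn.Module):
    """One autoencoder branch operating at a coarser temporal rate."""
    def __init__(self, feat_dim, hidden, rate):
        super().__init__()
        self.pool = nn.AvgPool1d(kernel_size=rate, stride=rate)
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, feat_dim)

    def forward(self, x):                                    # x: (batch, time, features)
        slow = self.pool(x.transpose(1, 2)).transpose(1, 2)  # downsample in time
        h, _ = self.encoder(slow)
        return self.decoder(h), slow                         # reconstruct the pooled input

branches = nn.ModuleList(RateBranch(64, 32, r) for r in (1, 2, 4, 8))
x = torch.randn(8, 400, 64)                                  # e.g., 400 frames of log-mel features
recon_losses = [nn.functional.mse_loss(rec, tgt) for rec, tgt in (b(x) for b in branches)]
print(sum(recon_losses))                                     # unsupervised pretraining objective
```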
Citations: 0
Synthesis of soundfields through irregular loudspeaker arrays based on convolutional neural networks
IF 2.4 | Computer Science (CAS Tier 3) | Q2 ACOUSTICS | Pub Date: 2024-03-28 | DOI: 10.1186/s13636-024-00337-7
Luca Comanducci, Fabio Antonacci, Augusto Sarti
Most soundfield synthesis approaches deal with extensive and regular loudspeaker arrays, which are often not suitable for home audio systems, due to physical space constraints. In this article, we propose a technique for soundfield synthesis through more easily deployable irregular loudspeaker arrays, i.e., where the spacing between loudspeakers is not constant, based on deep learning. The input are the driving signals obtained through a plane wave decomposition-based technique. While the considered driving signals are able to correctly reproduce the soundfield with a regular array, they show degraded performances when using irregular setups. Through a complex-valued convolutional neural network (CNN), we modify the driving signals in order to compensate the errors in the reproduction of the desired soundfield. Since no ground truth driving signals are available for the compensated ones, we train the model by calculating the loss between the desired soundfield at a number of control points and the one obtained through the driving signals estimated by the network. The proposed model must be retrained for each irregular loudspeaker array configuration. Numerical results show better reproduction accuracy with respect to the plane wave decomposition-based technique, pressure-matching approach, and linear optimizers for driving signal compensation.
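As a schematic of the training objective described above, the snippet below computes the reproduced pressure at a set of control points from driving signals through free-field Green's functions and compares it with a desired plane-wave soundfield; the geometry, frequency, and least-squares stand-in for the network's output are illustrative assumptions.

```python
import numpy as np

c, f = 343.0, 1000.0
k = 2 * np.pi * f / c                                      # wavenumber at 1 kHz

speakers = np.random.uniform(-2.0, 2.0, size=(16, 2))      # irregular loudspeaker positions
controls = np.random.uniform(-0.5, 0.5, size=(64, 2))      # control points in the listening area
dist = np.linalg.norm(controls[:, None, :] - speakers[None, :, :], axis=-1)
G = np.exp(-1j * k * dist) / (4 * np.pi * dist)             # free-field Green's functions

desired = np.exp(-1j * k * controls[:, 0])                  # target soundfield: plane wave along x
driving = np.linalg.lstsq(G, desired, rcond=None)[0]        # stand-in for the network's output

reproduced = G @ driving                                    # soundfield obtained from the driving signals
loss = np.mean(np.abs(reproduced - desired) ** 2)           # loss evaluated at the control points
print(loss)
```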
Citations: 0
An end-to-end approach for blindly rendering a virtual sound source in an audio augmented reality environment
IF 2.4 | Computer Science (CAS Tier 3) | Q2 ACOUSTICS | Pub Date: 2024-03-27 | DOI: 10.1186/s13636-024-00338-6
Shivam Saini, Isaac Engel, Jürgen Peissig
Audio augmented reality (AAR), a prominent topic in the field of audio, requires understanding the listening environment of the user for rendering an authentic virtual auditory object. Reverberation time ($RT_{60}$) is a predominant metric for the characterization of room acoustics and numerous approaches have been proposed to estimate it blindly from a reverberant speech signal. However, a single $RT_{60}$ value may not be sufficient to correctly describe and render the acoustics of a room. This contribution presents a method for the estimation of multiple room acoustic parameters required to render close-to-accurate room acoustics in an unknown environment. It is shown how these parameters can be estimated blindly using an audio transformer that can be deployed on a mobile device. Furthermore, the paper also discusses the use of the estimated room acoustic parameters to find a similar room from a dataset of real BRIRs that can be further used for rendering the virtual audio source. Additionally, a novel binaural room impulse response (BRIR) augmentation technique to overcome the limitation of inadequate data is proposed. Finally, the proposed method is validated perceptually by means of a listening test.
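The retrieval step described above can be pictured as a nearest-neighbor lookup over estimated room acoustic parameters; the sketch below assumes a hypothetical parameter layout and file names, not the paper's dataset or interface.

```python
import numpy as np

# Hypothetical dataset: each row holds [RT60 (s), volume (m^3), DRR (dB)] for one measured BRIR.
brir_params = np.array([[0.3,  40.0, 8.0],
                        [0.6, 120.0, 4.5],
                        [1.1, 400.0, 1.0]])
brir_files = ["small_office.wav", "living_room.wav", "lecture_hall.wav"]

def closest_brir(estimated):
    """Return the BRIR whose room parameters best match the blind estimates."""
    scale = brir_params.std(axis=0) + 1e-9                  # per-parameter normalization
    dists = np.linalg.norm((brir_params - estimated) / scale, axis=1)
    return brir_files[int(np.argmin(dists))]

print(closest_brir(np.array([0.55, 100.0, 5.0])))           # -> "living_room.wav"
```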
Citations: 0