
Latest publications: 2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics

Influence of secondary path estimation errors on the performance of ANC-motivated noise reduction algorithms for hearing aids
Pub Date : 2013-10-01 DOI: 10.1109/WASPAA.2013.6701812
Derya Dalga, S. Doclo
Current noise reduction techniques for open-fitting hearing aids that only use the external microphones on the hearing aid typically disregard the occurrence of signal leakage through the open fitting, leading to a degraded noise reduction performance. Using an ear mould with an internal (so-called error) microphone provides information about the signal leakage and hence makes it possible to improve the noise reduction performance. Recently, feedforward and combined feedforward-feedback active-noise-control-motivated (FF ANC and FF-FB ANC, respectively) algorithms for noise reduction have been presented for such open-fitting hearing aids. The noise reduction filters of these ANC-motivated algorithms depend on an estimate of the so-called secondary path between the receiver and the error microphone. In this paper, we analyze the influence of secondary path estimation errors on the performance of the ANC-motivated algorithms. For the FF ANC algorithm, it is possible to derive a closed-form expression of the filter as a function of the secondary path estimation error and to derive limit values for the allowable secondary path estimation errors. In addition, simulations show that even when estimation errors occur, the FF-FB ANC algorithm still outperforms the FF ANC algorithm.
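The secondary-path sensitivity studied here can be illustrated with a generic filtered-x LMS (FxLMS) loop — a sketch with invented toy paths and step size, not the paper's FF/FF-FB hearing-aid algorithms: with a perfect secondary-path model the adaptive filter cancels the disturbance at the error microphone, while a sign-inverted model (a worst-case phase error) makes the adaptation unstable.

```python
import numpy as np

def fxlms(x, p, s, s_hat, L=16, mu=0.01):
    """Toy filtered-x LMS loop: adapt w so the secondary-source signal
    cancels the primary disturbance at the error microphone."""
    w = np.zeros(L)
    xbuf = np.zeros(L)             # reference buffer for the control filter
    dbuf = np.zeros(len(p))        # primary-path input buffer
    ybuf = np.zeros(len(s))        # secondary-path input buffer
    fbuf = np.zeros(len(s_hat))    # buffer for filtering x through s_hat
    fxbuf = np.zeros(L)            # filtered-reference buffer for the update
    err = np.empty(len(x))
    for k in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
        dbuf = np.roll(dbuf, 1); dbuf[0] = x[k]
        ybuf = np.roll(ybuf, 1); ybuf[0] = w @ xbuf         # anti-noise sample
        e = p @ dbuf + s @ ybuf                             # error-mic signal
        fbuf = np.roll(fbuf, 1); fbuf[0] = x[k]
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = s_hat @ fbuf  # x filtered by model
        w -= mu * e * fxbuf                                 # FxLMS update
        err[k] = e
    return err

rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
p = np.array([0.0, 0.8, 0.3])        # toy primary path
s = np.array([0.6, 0.2])             # toy secondary path
err_good = fxlms(x, p, s, s_hat=s)   # perfect secondary-path estimate
err_bad = fxlms(x, p, s, s_hat=-s)   # 180-degree model error -> unstable
print(np.mean(err_good[-2000:] ** 2))
```

With `s_hat = s` the residual power decays toward zero; flipping the model's sign puts the phase error beyond 90° at every frequency, violating the classic FxLMS stability condition, which mirrors the paper's point that the allowable secondary-path estimation error is bounded.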
Citations: 2
A simple adaptive cardioid direction finding algorithm
Pub Date : 2013-10-01 DOI: 10.1109/WASPAA.2013.6701831
G. Elko, Jens Meyer
A simple adaptive cardioid direction-finder algorithm using signals from closely spaced omnidirectional microphones is described. One implementation utilizes a computationally simple constrained LMS adaptive filter with only 3-taps for the general 3D case and 2-taps for the 2D case. The solution adaptively finds the location of the single cardioid null that minimizes the output power of a generally 2D or 3D rotated cardioid.
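A minimal NumPy sketch of the delay-and-subtract version of this idea, using toy signals with a one-sample inter-microphone delay (the parameter values and the β-to-angle mapping below are illustrative assumptions, not the paper's exact implementation): forward and backward cardioids are formed from two omnis, a single adaptive weight β minimizes the output power of y = c_f − β·c_b, and the null angle follows from cos θ = (β − 1)/(β + 1).

```python
import numpy as np

def adapt_beta(x1, x2, mu=0.05, eps=1e-8):
    """One-weight adaptive combiner y = cf - beta*cb of forward/backward
    cardioids built by delay-and-subtract from two omni microphones."""
    beta = 0.5
    for n in range(1, len(x1)):
        cf = x1[n] - x2[n - 1]    # forward-facing cardioid (null at rear)
        cb = x2[n] - x1[n - 1]    # backward-facing cardioid (null at front)
        y = cf - beta * cb
        beta += mu * y * cb / (cb * cb + eps)   # NLMS step, minimizes y**2
    return beta

rng = np.random.default_rng(1)
s = rng.standard_normal(5000)

# Broadside source (90 deg): both mics see the same signal -> beta -> 1
beta_bs = adapt_beta(s, s)
# Rear source (180 deg): mic 1 hears mic 2's signal one sample later -> beta -> 0
beta_rear = adapt_beta(np.roll(s, 1), s)

def doa(beta):
    """Null direction of the combined pattern: cos(theta) = (b-1)/(b+1)."""
    return np.degrees(np.arccos((beta - 1) / (beta + 1)))

print(round(float(doa(beta_bs))), round(float(doa(beta_rear))))
```

For a front end-fire source c_b vanishes, so β is unobservable there; the two test directions above are the ones where the single-weight adaptation is well conditioned.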
Citations: 1
A sparse nonuniformly partitioned multidelay filter for acoustic echo cancellation
Pub Date : 2013-10-01 DOI: 10.1109/WASPAA.2013.6701832
D. Giacobello, Joshua Atkins
In this paper, we propose a formulation of the multidelay adaptive filter for acoustic echo cancellation by modeling the echo path using sparse nonuniform partitions. The nonuniform partitioning allows for a low algorithmic delay without sacrificing the high order of the adaptive filter. It also further improves upon the computational efficiency of the uniformly partitioned multidelay filter by leveraging larger FFT sizes for certain partitions. The sparsity constraint allows for the definition of active and inactive regions of the adaptive filter, providing a better estimate of the order of the filter. Simulation results are provided showing increased convergence speed with the same steady-state misalignment compared to traditional multidelay filtering with both uniform and nonuniform partitioning.
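For reference, the uniformly partitioned multidelay (frequency-domain) filter that this paper generalizes can be sketched as a plain overlap-save implementation with equal block sizes, verified here against direct convolution (block size and signal lengths are arbitrary toy choices):

```python
import numpy as np

def upols_convolve(x, h, B=64):
    """Uniformly partitioned overlap-save convolution: the impulse response
    is split into length-B blocks, each applied in the frequency domain and
    summed through a spectrum delay line; output equals direct convolution."""
    P = int(np.ceil(len(h) / B))
    h = np.pad(h, (0, P * B - len(h)))
    # One 2B-point spectrum per partition of the impulse response
    H = np.array([np.fft.rfft(h[p * B:(p + 1) * B], 2 * B) for p in range(P)])
    nblocks = int(np.ceil(len(x) / B))
    x = np.pad(x, (0, nblocks * B - len(x)))
    fdl = np.zeros((P, B + 1), complex)   # frequency-domain delay line
    prev = np.zeros(B)
    out = np.empty(nblocks * B)
    for b in range(nblocks):
        cur = x[b * B:(b + 1) * B]
        fdl = np.roll(fdl, 1, axis=0)
        fdl[0] = np.fft.rfft(np.concatenate([prev, cur]))  # overlap-save input
        y = np.fft.irfft((fdl * H).sum(axis=0))
        out[b * B:(b + 1) * B] = y[B:]    # keep the valid (non-aliased) half
        prev = cur
    return out

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
h = rng.standard_normal(256)       # e.g. a 256-tap echo path, in 4 partitions
y = upols_convolve(x, h, B=64)
print(np.allclose(y, np.convolve(x, h)[:len(y)]))
```

The algorithmic delay is one block (B samples) regardless of the filter length; the paper's nonuniform partitioning keeps that low delay while using larger FFTs for the later partitions.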
Citations: 2
Geometrically Constrained TRINICON-based relative transfer function estimation in underdetermined scenarios
Pub Date : 2013-10-01 DOI: 10.1109/WASPAA.2013.6701822
K. Reindl, S. M. Golan, Hendrik Barfuss, S. Gannot, Walter Kellermann
Speech extraction in a reverberant enclosure using a linearly-constrained minimum variance (LCMV) beamformer usually requires reliable estimates of the relative transfer functions (RTFs) of the desired source to all microphones. In this contribution, a geometrically constrained (GC)-TRINICON concept for RTF estimation is proposed. This approach is applicable in challenging multiple-speaker scenarios and in underdetermined situations, where the number of simultaneously active sources outnumbers the number of available microphone signals. As a most practically relevant and distinctive feature, this concept does not require any voice-activity-based control mechanism. It only requires coarse reference information on the target direction of arrival (DoA). The proposed GC-TRINICON method is compared to a recently proposed subspace method for RTF estimation relying on voice-activity control. Experimental results confirm the effectiveness of GC-TRINICON in realistic conditions.
Citations: 31
Acoustic scene classification using sparse feature learning and event-based pooling
Pub Date : 2013-10-01 DOI: 10.1109/WASPAA.2013.6701893
Kyogu Lee, Ziwon Hyung, Juhan Nam
Recently, unsupervised learning algorithms have been successfully used to represent data in many machine recognition tasks. In particular, sparse feature learning algorithms have shown that they can not only discover meaningful structures from raw data but also outperform many hand-engineered features. In this paper, we apply the sparse feature learning approach to acoustic scene classification. We use a sparse restricted Boltzmann machine to capture manifold local acoustic structures from audio data and represent the data in a high-dimensional sparse feature space given the learned structures. For scene classification, we summarize the local features by pooling over audio scene data. While feature pooling is typically performed over uniformly divided segments, we suggest a new pooling method, which first detects audio events and then performs pooling only over detected events, considering the irregular occurrence of audio events in acoustic scene data. We evaluate the learned features on the IEEE AASP Challenge development set, comparing them with a baseline model using mel-frequency cepstral coefficients (MFCCs). The results show that learned features outperform MFCCs, event-based pooling achieves higher accuracy than uniform pooling, and, furthermore, a combination of the two methods performs even better than either one used alone.
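The contrast between uniform and event-based pooling can be sketched with synthetic frame features (the energy-threshold event detector and all numbers below are illustrative assumptions; the paper's own event detection differs): pooling only over detected event frames keeps a short, distinctive event from being washed out by long stretches of background.

```python
import numpy as np

def uniform_pool(feats, n_seg=4):
    """Mean-pool frame features over equally sized time segments."""
    segs = np.array_split(feats, n_seg, axis=0)
    return np.concatenate([seg.mean(axis=0) for seg in segs])

def event_pool(feats, energy, thresh=None):
    """Mean-pool only over frames flagged as events (here: frame energy
    above an adaptive threshold -- a stand-in for a real event detector)."""
    if thresh is None:
        thresh = energy.mean() + 0.5 * energy.std()
    active = energy > thresh
    if not active.any():          # fall back to all frames if nothing fires
        active[:] = True
    return feats[active].mean(axis=0)

rng = np.random.default_rng(3)
T, D = 200, 8
feats = 0.1 * rng.standard_normal((T, D))   # quiet background frames
energy = np.full(T, 1.0)
feats[50:60] += 5.0                          # a short, loud, distinctive event
energy[50:60] = 10.0

print(event_pool(feats, energy).mean(), uniform_pool(feats, n_seg=1).mean())
```

With 10 event frames out of 200, pooling over everything dilutes the event's signature by a factor of 20, while event-based pooling preserves it.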
Citations: 29
Environment-aware ideal binary mask estimation using monaural cues
Pub Date : 2013-10-01 DOI: 10.1109/WASPAA.2013.6701821
T. May, T. Dau
We present a monaural approach to speech segregation that estimates the ideal binary mask (IBM) by combining amplitude modulation spectrogram (AMS) features, pitch-based features and speech presence probability (SPP) features derived from noise statistics. To maintain a high mask estimation accuracy in the presence of various background noises, the system employs environment-specific segregation models and automatically selects the appropriate model for a given input signal. Furthermore, instead of classifying each time-frequency (T-F) unit independently, the a posteriori probabilities of speech and noise presence are evaluated by considering adjacent T-F units. The proposed system achieves high classification accuracy.
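The estimation target itself, the ideal binary mask, is simple to state. This sketch computes the oracle IBM from known clean and noise spectrograms on toy arrays; the paper's contribution is estimating this mask from AMS, pitch, and SPP features when the clean signal is unavailable.

```python
import numpy as np

def ideal_binary_mask(clean_spec, noise_spec, lc_db=0.0):
    """IBM: 1 in time-frequency units whose local SNR exceeds the local
    criterion (LC, in dB), 0 elsewhere."""
    eps = 1e-12
    snr_db = 10 * np.log10((np.abs(clean_spec) ** 2 + eps) /
                           (np.abs(noise_spec) ** 2 + eps))
    return (snr_db > lc_db).astype(float)

# Toy spectrograms: speech energy in the low bands, noise in the high bands
clean = np.zeros((4, 10)); clean[:2] = 2.0
noise = np.zeros((4, 10)); noise[2:] = 2.0
mask = ideal_binary_mask(clean, noise)
print(mask[:2].mean(), mask[2:].mean())
```

Applying the mask to the noisy mixture keeps the speech-dominated units and zeroes the noise-dominated ones, which is the segregation goal the classifier above is trained toward.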
Citations: 15
Learning an intelligibility map of individual utterances
Pub Date : 2013-10-01 DOI: 10.1109/WASPAA.2013.6701835
Michael I. Mandel
Predicting the intelligibility of noisy recordings is difficult and most current algorithms only aim to be correct on average across many recordings. This paper describes a listening test paradigm and associated analysis technique that can predict the intelligibility of a specific recording of a word in the presence of a specific noise instance. The analysis learns a map of the importance of each point in the recording's spectrogram to the overall intelligibility of the word when glimpsed through “bubbles” in many noise instances. By treating this as a classification problem, a linear classifier can be used to predict intelligibility and can be examined to determine the importance of spectral regions. This approach was tested on recordings of vowels and consonants. The important regions identified by the model in these tests agreed with those identified by a standard, non-predictive statistical test of independence and with the acoustic phonetics literature.
Citations: 6
Speech understanding in noise provided by a simulated cochlear implant processor based on matching pursuit
Pub Date : 2013-10-01 DOI: 10.1109/WASPAA.2013.6701878
A. Kressner, C. Rozell
Speech reception is poor for cochlear implant recipients in listening environments with interfering noise. This study investigates the speech understanding provided in interfering noise by a coding strategy based on the sparse approximation algorithm matching pursuit (MP) and additionally proposes two modifications to the strategy. The levels of spectral information provided by the MP strategy and the modified MP strategy are compared to that of continuous interleaved sampling (CIS) and a strategy based on the ideal binary mask (IBM) using vocoded speech and the normalized covariance metric (NCM). We demonstrate objective intelligibility improvements in quiet, and total and partial objective intelligibility restoration in steady-state and fluctuating noise, respectively.
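The matching pursuit core of such a coding strategy can be sketched in a few lines — a textbook MP over a generic dictionary, not the paper's implant-specific processor: greedily select the atom best correlated with the residual and peel off its projection.

```python
import numpy as np

def matching_pursuit(x, D, n_iter=10, tol=1e-6):
    """Greedy MP: repeatedly pick the (unit-norm) dictionary atom most
    correlated with the residual and subtract its projection."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual            # correlation with every atom
        k = np.argmax(np.abs(corr))
        if abs(corr[k]) < tol:           # nothing left to explain
            break
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

# With an orthonormal dictionary, MP recovers an exactly 2-sparse signal
D = np.eye(8)
x = 3.0 * D[:, 1] - 2.0 * D[:, 5]
coeffs, res = matching_pursuit(x, D, n_iter=5)
print(np.allclose(res, 0), coeffs[1], coeffs[5])
```

The sparsity of the resulting coefficient vector is what maps naturally onto the limited number of stimulation channels in an implant-style coding strategy.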
Citations: 1
Large-scale audio feature extraction and SVM for acoustic scene classification
Pub Date : 2013-10-01 DOI: 10.1109/WASPAA.2013.6701857
Jürgen T. Geiger, Björn Schuller, G. Rigoll
This work describes a system for acoustic scene classification using large-scale audio feature extraction. It is our contribution to the Scene Classification track of the IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (D-CASE). The system classifies 30 second long recordings of 10 different acoustic scenes. From the highly variable recordings, a large number of spectral, cepstral, energy and voicing-related audio features are extracted. Using a sliding window approach, classification is performed on short windows. SVM are used to classify these short segments, and a majority voting scheme is employed to get a decision for longer recordings. On the official development set of the challenge, an accuracy of 73 % is achieved. SVM are compared with a nearest neighbour classifier and an approach called Latent Perceptual Indexing, whereby SVM achieve the best results. A feature analysis using the t-statistic shows that mainly Mel spectra are the most relevant features.
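The sliding-window-plus-majority-vote decision scheme can be sketched as follows. A nearest-centroid stand-in replaces the SVM so the example stays dependency-free, and all features, centroids, and counts are invented toy values:

```python
import numpy as np
from collections import Counter

def nearest_centroid_predict(X, centroids):
    """Stand-in window classifier (the paper uses an SVM): assign each
    window's feature vector to the closest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def majority_vote(labels):
    """Decision for the whole recording: the most frequent window label."""
    return Counter(labels).most_common(1)[0][0]

rng = np.random.default_rng(4)
centroids = np.array([[0.0, 0.0], [3.0, 3.0]])   # two acoustic scene classes
# A long recording split into 20 windows, mostly near scene 1, with outliers
windows = centroids[1] + 0.5 * rng.standard_normal((20, 2))
windows[:4] = centroids[0] + 0.5 * rng.standard_normal((4, 2))

per_window = nearest_centroid_predict(windows, centroids)
print(majority_vote(per_window))
```

A few misclassified windows are simply outvoted, which is why per-window classification plus voting is more robust for long recordings than a single whole-recording decision.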
Citations: 123
Speech enhancement for hearing instruments: Enabling communication in adverse conditions
Pub Date : 2013-10-01 DOI: 10.1109/WASPAA.2013.6701897
Rainer Martin
Hearing instruments are frequently used in notoriously difficult acoustic scenarios. Even for normal-hearing people, ambient noise, reverberation and echoes often contribute to a degraded communication experience. The impact of these factors becomes significantly more prominent when participants suffer from a hearing loss. Nevertheless, hearing instruments are frequently used in these adverse conditions and must enable effortless communication. In this talk I will discuss challenges that are encountered in acoustic signal processing for hearing instruments. While many algorithms are motivated by the quest for a cocktail party processor and by the high-level paradigms of auditory scene analysis, a careful design of statistical models and processing schemes is necessary to achieve the required performance in real world applications. Rather strict requirements result from the size of the device, the power budget, and the admissible processing latency. Starting with low-latency spectral analysis and synthesis systems for speech and music signals, I will continue highlighting statistical estimation and smoothing techniques for the enhancement of noisy speech. The talk emphasizes the necessity to find a good balance between temporal and spectral resolution, processing latency, and statistical estimation errors. It concludes with single and multi-channel speech enhancement examples and an outlook towards opportunities which reside in the use of comprehensive speech processing models and distributed resources.
Citations: 0