Trends in Hearing: Latest Publications

Automated Measurement of Speech Recognition, Reaction Time, and Speech Rate and Their Relation to Self-Reported Listening Effort for Normal-Hearing and Hearing-Impaired Listeners Using Various Maskers.
IF 2.6 | Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241276435
Inga Holube, Stefan Taesler, Saskia Ibelings, Martin Hansen, Jasper Ooster

In speech audiometry, the speech-recognition threshold (SRT) is usually established by adjusting the signal-to-noise ratio (SNR) until 50% of the words or sentences are repeated correctly. However, these conditions are rarely encountered in everyday situations. Therefore, for a group of 15 young participants with normal hearing and a group of 12 older participants with hearing impairment, speech-recognition scores were determined at SRT and at four higher SNRs using several stationary and fluctuating maskers. Participants' verbal responses were recorded, and participants were asked to self-report their listening effort on a categorical scale (self-reported listening effort, SR-LE). The responses were analyzed using an Automatic Speech Recognizer (ASR) and compared to the results of a human examiner. An intraclass correlation coefficient of r = .993 for the agreement between their corresponding speech-recognition scores was observed. As expected, speech-recognition scores increased with increasing SNR and decreased with increasing SR-LE. However, differences between speech-recognition scores for fluctuating and stationary maskers were observed as a function of SNR, but not as a function of SR-LE. The verbal response time (VRT) and the response speech rate (RSR) of the listeners' responses were measured using an ASR. The participants with hearing impairment showed significantly lower RSRs and higher VRTs compared to the participants with normal hearing. These differences may be attributed to differences in age, hearing, or both. With increasing SR-LE, VRT increased and RSR decreased. The results show the possibility of deriving a behavioral measure, VRT, measured directly from participants' verbal responses during speech audiometry, as a proxy for SR-LE.
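As a rough illustration of the adaptive procedure mentioned above, the sketch below shows a simple 1-up/1-down SNR track that converges near the 50%-correct SRT. The step size, trial count, and scoring rule are illustrative assumptions, not the procedure used in this study.

```python
# Illustrative sketch only: a 1-up/1-down adaptive SNR track that converges
# near the 50%-correct speech-recognition threshold (SRT). Step size, number
# of trials, and the scoring rule are assumptions, not the study's method.

def update_snr(snr_db, proportion_correct, step_db=2.0):
    """Make the next trial harder after a mostly correct response, easier otherwise."""
    return snr_db - step_db if proportion_correct >= 0.5 else snr_db + step_db

def estimate_srt(score_trial, start_snr_db=0.0, n_trials=20, step_db=2.0):
    """Run the track and average the SNRs of the second half as the SRT estimate."""
    snr, history = start_snr_db, []
    for _ in range(n_trials):
        proportion_correct = score_trial(snr)   # fraction of words repeated correctly
        snr = update_snr(snr, proportion_correct, step_db)
        history.append(snr)
    tail = history[n_trials // 2:]
    return sum(tail) / len(tail)

# Toy listener whose responses are correct only above -5 dB SNR:
print(estimate_srt(lambda snr: 1.0 if snr > -5.0 else 0.0))
```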

Citations: 0
Editorial: Cochlear Implants and Music.
IF 2.7 | Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241231685
Deborah A Vickers, Brian C J Moore
{"title":"Editorial: Cochlear Implants and Music.","authors":"Deborah A Vickers, Brian C J Moore","doi":"10.1177/23312165241231685","DOIUrl":"10.1177/23312165241231685","url":null,"abstract":"","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241231685"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10874149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139742320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated Speech Audiometry: Can It Work Using Open-Source Pre-Trained Kaldi-NL Automatic Speech Recognition?
IF 2.7 | Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241229057
Gloria Araiza-Illan, Luke Meyer, Khiet P Truong, Deniz Başkent

A practical speech audiometry tool is the digits-in-noise (DIN) test for hearing screening of populations of varying ages and hearing status. The test is usually conducted by a human supervisor (e.g., clinician), who scores the responses spoken by the listener, or online, where software scores the responses entered by the listener. The test presents 24 digit-triplets in an adaptive staircase procedure, resulting in a speech reception threshold (SRT). We propose an alternative automated DIN test setup that can evaluate spoken responses whilst conducted without a human supervisor, using the open-source automatic speech recognition toolkit, Kaldi-NL. Thirty self-reported normal-hearing Dutch adults (19-64 years) completed one DIN + Kaldi-NL test. Their spoken responses were recorded and used for evaluating the transcript of decoded responses by Kaldi-NL. Study 1 evaluated the Kaldi-NL performance through its word error rate (WER): the summed digit decoding errors in the transcript as a percentage of the total number of digits present in the spoken responses. Average WER across participants was 5.0% (range 0-48%, SD = 8.8%), with average decoding errors in three triplets per participant. Study 2 analyzed the effect that triplets with decoding errors from Kaldi-NL had on the DIN test output (SRT), using bootstrapping simulations. Previous research indicated 0.70 dB as the typical within-subject SRT variability for normal-hearing adults. Study 2 showed that up to four triplets with decoding errors produce SRT variations within this range, suggesting that our proposed setup could be feasible for clinical applications.
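For illustration, here is a minimal sketch of a digit-level error rate in the spirit of the WER described above; the position-by-position scoring and the data layout are assumptions, and this is not the Kaldi-NL pipeline itself.

```python
# Illustrative sketch: digit-level decoding error rate for digit-triplet
# responses. Assumes position-by-position comparison and ignores insertions;
# this is not the Kaldi-NL scoring pipeline.

def digit_error_rate(spoken_triplets, decoded_triplets):
    """Summed digit decoding errors divided by the total number of spoken digits."""
    errors, total = 0, 0
    for spoken, decoded in zip(spoken_triplets, decoded_triplets):
        total += len(spoken)
        padded = list(decoded) + [None] * max(0, len(spoken) - len(decoded))
        errors += sum(s != d for s, d in zip(spoken, padded))
    return errors / total if total else 0.0

# One substituted digit out of six spoken digits -> 0.1666...
print(digit_error_rate([[2, 7, 5], [8, 1, 4]], [[2, 7, 5], [8, 3, 4]]))
```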

Citations: 0
Review of Binaural Processing With Asymmetrical Hearing Outcomes in Patients With Bilateral Cochlear Implants.
IF 2.7 | Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241229880
Sean R Anderson, Emily Burg, Lukas Suveg, Ruth Y Litovsky

Bilateral cochlear implants (BiCIs) result in several benefits, including improvements in speech understanding in noise and sound source localization. However, the benefit bilateral implants provide among recipients varies considerably across individuals. Here we consider one of the reasons for this variability: difference in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place, motivating future research in the area of interaural asymmetry. We first consider bottom-up processing, where binaural cues are represented using excitation-inhibition of signals from the left ear and right ear, varying with the location of the sound in space, and represented by the lateral superior olive in the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by cascading series of cortical circuits. An internal, perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.

Citations: 0
Estimating Pitch Information From Simulated Cochlear Implant Signals With Deep Neural Networks.
IF 2.6 | Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241298606
Takanori Ashihara, Shigeto Furukawa, Makio Kashino

Cochlear implant (CI) users, even with substantial speech comprehension, generally have poor sensitivity to pitch information (or fundamental frequency, F0). This insensitivity is often attributed to limited spectral and temporal resolution in the CI signals. However, the pitch sensitivity markedly varies among individuals, and some users exhibit fairly good sensitivity. This indicates that the CI signal contains sufficient information about F0, and users' sensitivity is predominantly limited by other physiological conditions such as neuroplasticity or neural health. We estimated the upper limit of F0 information that a CI signal can convey by decoding F0 from simulated CI signals (multi-channel pulsatile signals) with a deep neural network model (referred to as the CI model). We varied the number of electrode channels and the pulse rate, which should respectively affect spectral and temporal resolutions of stimulus representations. The F0-estimation performance generally improved with increasing number of channels and pulse rate. For the sounds presented under quiet conditions, the model performance was at best comparable to that of a control waveform model, which received raw-waveform inputs. Under conditions in which background noise was imposed, the performance of the CI model generally degraded by a greater degree than that of the waveform model. The pulse rate had a particularly large effect on predicted performance. These observations indicate that the CI signal contains some information for predicting F0, which is particularly sufficient for targets under quiet conditions. The temporal resolution (represented as pulse rate) plays a critical role in pitch representation under noisy conditions.
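As a hedged illustration of the kind of input representation described above, the sketch below builds a toy multi-channel "CI-like" signal in which each band envelope is sampled at a fixed pulse rate; the filterbank, band edges, and sampling scheme are assumptions, not the authors' simulation or their DNN decoder.

```python
# Toy "CI-like" representation (illustrative assumptions throughout): split a
# waveform into log-spaced bands and sample each band envelope at a fixed pulse
# rate, so channel count sets spectral resolution and pulse rate sets temporal
# resolution. Assumes fs is comfortably above 2 * f_hi.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def ci_like_representation(x, fs, n_channels=12, pulse_rate=900.0,
                           f_lo=100.0, f_hi=6000.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # band edges in Hz
    hop = int(round(fs / pulse_rate))                   # samples between pulses
    frames = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        envelope = np.abs(hilbert(band))                # channel envelope
        frames.append(envelope[::hop])                  # sample at the pulse rate
    return np.stack(frames)                             # shape: (channels, pulses)

# Example: one second of noise at 16 kHz -> a (12, ~900) pulse grid.
rep = ci_like_representation(np.random.randn(16000), fs=16000)
print(rep.shape)
```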

Citations: 0
The Effect of Collaborative Triadic Conversations in Noise on Decision-Making in a General-Knowledge Task.
IF 2.6 | Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241305058
Ingvi Örnolfsson, Axel Ahrens, Torsten Dau, Tobias May

Collaboration is a key element of many communicative interactions. Analyzing the effect of collaborative interaction on subsequent decision-making tasks offers the potential to quantitatively evaluate criteria that are indicative of successful communication. While many studies have explored how collaboration aids decision-making, little is known about how communicative barriers, such as loud background noise or hearing impairment, affect this process. This study investigated how collaborative triadic conversations held in different background noise levels affected the decision-making of individual group members in a subsequent individual task. Thirty normal-hearing participants were recruited and organized into triads. First, each participant answered a series of binary general knowledge questions and provided a confidence rating along with each response. The questions were then discussed in triads in either loud (78 dB) or soft (48 dB) background noise. Participants then answered the same questions individually again. Three decision-making measures - stay/switch behavior, decision convergence, and voting strategy - were used to assess if and how participants adjusted their initial decisions after the conversations. The results revealed an interaction between initial confidence rating and noise level: participants were more likely to modify their decisions towards high-confidence prior decisions, and this effect was more pronounced when the conversations had taken place in loud noise. We speculate that this may be because low-confidence opinions are less likely to be voiced in noisy environments compared to high-confidence opinions. The findings demonstrate that decision-making tasks can be designed for conversation studies with groups of more than two participants, and that such tasks can be used to explore how communicative barriers impact subsequent decision-making of individual group members.
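As an illustration of the stay/switch measure mentioned above, here is a small sketch under an assumed data layout (pre-discussion answer, confidence rating, post-discussion answer per question); the confidence cut-off is hypothetical.

```python
# Illustrative sketch: proportion of answers switched after the conversation,
# split by whether the initial confidence rating was high or low. The record
# layout and the confidence cut-off are hypothetical.

def switch_rates(records, high_confidence_cutoff=4):
    """records: iterable of (pre_answer, confidence, post_answer) tuples."""
    counts = {"high": [0, 0], "low": [0, 0]}            # [switches, answers]
    for pre, confidence, post in records:
        group = "high" if confidence >= high_confidence_cutoff else "low"
        counts[group][0] += int(pre != post)
        counts[group][1] += 1
    return {g: (s / n if n else 0.0) for g, (s, n) in counts.items()}

# Example: low-confidence answers are switched more often than high-confidence ones.
print(switch_rates([("A", 5, "A"), ("B", 2, "A"), ("A", 1, "B"), ("B", 5, "B")]))
```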

Citations: 0
Sound Localization in Single-Sided Deafness; Outcomes of a Randomized Controlled Trial on the Comparison Between Cochlear Implantation, Bone Conduction Devices, and Contralateral Routing of Signals Hearing Aids.
IF 2.6 | Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241287092
Jan A A van Heteren, Hanneke D van Oorschot, Anne W Wendrich, Jeroen P M Peters, Koenraad S Rhebergen, Wilko Grolman, Robert J Stokroos, Adriana L Smit

There is currently a lack of prospective studies comparing multiple treatment options for single-sided deafness (SSD) in terms of long-term sound localization outcomes. This randomized controlled trial (RCT) aims to compare the objective and subjective sound localization abilities of SSD patients treated with a cochlear implant (CI), a bone conduction device (BCD), a contralateral routing of signals (CROS) hearing aid, or no treatment after two years of follow-up. About 120 eligible patients were randomized to cochlear implantation or to a trial period with first a BCD on a headband, then a CROS (or vice versa). After the trial periods, participants opted for a surgically implanted BCD, a CROS, or no treatment. Sound localization accuracy (in three configurations, calculated as percentage correct and root-mean squared error in degrees) and subjective spatial hearing (subscale of the Speech, Spatial and Qualities of hearing (SSQ) questionnaire) were assessed at baseline and after 24 months of follow-up. At the start of follow-up, 28 participants were implanted with a CI, 25 with a BCD, 34 chose a CROS, and 26 opted for no treatment. Participants in the CI group showed better sound localization accuracy and subjective spatial hearing compared to participants in the BCD, CROS, and no-treatment groups at 24 months. Participants in the CI and CROS groups showed improved subjective spatial hearing at 24 months compared to baseline. To conclude, CI outperformed the BCD, CROS, and no-treatment groups in terms of sound localization accuracy and subjective spatial hearing in SSD patients. TRIAL REGISTRATION Netherlands Trial Register (https://onderzoekmetmensen.nl): NL4457, CINGLE trial.
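A minimal sketch of the two accuracy measures named above (percentage correct and root-mean-square error in degrees), assuming targets and responses are azimuths in degrees and counting a response as correct only when it matches the target exactly; that exact-match criterion is a simplifying assumption.

```python
# Illustrative sketch of the two localization accuracy measures: percentage of
# correct responses and root-mean-square error in degrees. Exact-match scoring
# is a simplifying assumption.

import math

def localization_accuracy(targets_deg, responses_deg):
    pairs = list(zip(targets_deg, responses_deg))
    pct_correct = 100.0 * sum(t == r for t, r in pairs) / len(pairs)
    rmse_deg = math.sqrt(sum((t - r) ** 2 for t, r in pairs) / len(pairs))
    return pct_correct, rmse_deg

# Example: one of four responses off by 30 degrees -> 75% correct, 15 degrees RMSE.
print(localization_accuracy([-60, -20, 20, 60], [-60, -20, 20, 30]))
```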

Citations: 0
Processing of Visual Speech Cues in Speech-in-Noise Comprehension Depends on Working Memory Capacity and Enhances Neural Speech Tracking in Older Adults With Hearing Impairment.
IF 2.6 | Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241287622
Vanessa Frei, Raffael Schmitt, Martin Meyer, Nathalie Giroud

Comprehending speech in noise (SiN) poses a challenge for older hearing-impaired listeners, requiring auditory and working memory resources. Visual speech cues provide additional sensory information supporting speech understanding, while the extent of such visual benefit is characterized by large variability, which might be accounted for by individual differences in working memory capacity (WMC). In the current study, we investigated behavioral and neurofunctional (i.e., neural speech tracking) correlates of auditory and audio-visual speech comprehension in babble noise and the associations with WMC. Healthy older adults with hearing impairment quantified by pure-tone hearing loss (threshold average: 31.85-57 dB, N = 67) listened to sentences in babble noise in audio-only, visual-only and audio-visual speech modality and performed a pattern matching and a comprehension task, while electroencephalography (EEG) was recorded. Behaviorally, no significant difference in task performance was observed across modalities. However, we did find a significant association between individual working memory capacity and task performance, suggesting a more complex interplay between audio-visual speech cues, working memory capacity and real-world listening tasks. Furthermore, we found that the visual speech presentation was accompanied by increased cortical tracking of the speech envelope, particularly in a right-hemispheric auditory topographical cluster. Post-hoc, we investigated the potential relationships between the behavioral performance and neural speech tracking but were not able to establish a significant association. Overall, our results show an increase in neurofunctional correlates of speech associated with congruent visual speech cues, specifically in a right auditory cluster, suggesting multisensory integration.
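As a rough illustration of "neural speech tracking", the sketch below correlates a speech envelope with a single EEG channel at a fixed lag; the preprocessing, lag choice, and equal-sampling-rate assumption are illustrative and not the analysis used in the study.

```python
# Illustrative sketch: quantify neural tracking of the speech envelope as the
# correlation between the (lagged) acoustic envelope and one EEG channel.
# Both signals are assumed to be preprocessed and sampled at the same rate.

import numpy as np

def envelope_tracking_corr(speech_envelope, eeg_channel, lag_samples=0):
    """Pearson correlation between the speech envelope and a lag-shifted EEG channel."""
    if lag_samples > 0:
        speech_envelope = speech_envelope[:-lag_samples]
        eeg_channel = eeg_channel[lag_samples:]
    n = min(len(speech_envelope), len(eeg_channel))
    return float(np.corrcoef(speech_envelope[:n], eeg_channel[:n])[0, 1])
```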

Citations: 0
Global, Regional, and National Burdens of Hearing Loss for Children and Adolescents from 1990 to 2019: A Trend Analysis.
IF 2.6 | Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241273391
Kan Chen, Bo Yang, Xiaoyan Yue, He Mi, Jianjun Leng, Lujie Li, Haoyu Wang, Yaxin Lai

This study presents a comprehensive analysis of global, regional, and national trends in the burden of hearing loss (HL) among children and adolescents from 1990 to 2019, using data from the Global Burden of Disease study. Over this period, there was a general decline in HL prevalence and years lived with disability (YLDs) globally, with average annual percentage changes (AAPCs) of -0.03% (95% uncertainty interval [UI], -0.04% to -0.01%; p = 0.001) and -0.23% (95% UI, -0.25% to -0.20%; p < 0.001). Males exhibited higher rates of HL prevalence and YLDs than females. Mild and moderate HL were the most common categories across all age groups, but the highest proportion of YLDs was associated with profound HL [22.23% (95% UI, 8.63%-57.53%)]. Among females aged 15-19 years, the prevalence and YLD rates for moderate HL rose, with AAPCs of 0.14% (95% UI, 0.06%-0.22%; p = 0.001) and 0.13% (95% UI, 0.08%-0.18%; p < 0.001). This increase is primarily attributed to age-related and other HL (such as environmental, lifestyle factors, and occupational noise exposure) and otitis media, highlighting the need for targeted research and interventions for this demographic. Southeast Asia and Western Sub-Saharan Africa bore the heaviest HL burden, while High-income North America showed lower HL prevalence and YLD rates but a slight increasing trend in recent years, with AAPCs of 0.13% (95% UI, 0.1%-0.16%; p < 0.001) and 0.08% (95% UI, 0.04% to 0.12%; p < 0.001). Additionally, the analysis revealed a significant negative correlation between sociodemographic index (SDI) and both HL prevalence (r = -0.74; p < 0.001) and YLD (r = -0.76; p < 0.001) rates. However, the changes in HL trends were not significantly correlated with SDI, suggesting that factors beyond economic development, such as policies and cultural practices, also affect HL. Despite the overall optimistic trend, this study emphasizes the continued need to focus on specific high-risk groups and regions to further reduce the HL burden and enhance the quality of life for affected children and adolescents.
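For the AAPC figures quoted above, here is a simplified illustration: an average annual percentage change can be approximated from the slope of a log-linear fit of rate against calendar year. Trend analyses of this kind typically use joinpoint-style segmented models, so this single-segment sketch is only an approximation.

```python
# Simplified sketch: approximate the average annual percentage change (AAPC)
# from a single log-linear fit of ln(rate) on calendar year. Full joinpoint
# regression with multiple segments is not implemented here.

import numpy as np

def annual_percentage_change(years, rates):
    slope, _ = np.polyfit(np.asarray(years, dtype=float), np.log(rates), 1)
    return 100.0 * (np.exp(slope) - 1.0)

# Example: a rate declining by 0.1% per year over 1990-2019.
years = np.arange(1990, 2020)
rates = 100.0 * 0.999 ** (years - 1990)
print(annual_percentage_change(years, rates))   # approximately -0.1
```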

Citations: 0
Auditory Spatial Bisection of Blind and Normally Sighted Individuals in Free Field and Virtual Acoustics.
IF 2.7 | Tier 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-01-01 | DOI: 10.1177/23312165241230947
Stefanie Goicke, Florian Denk, Tim Jürgens

Sound localization is an important ability in everyday life. This study investigates the influence of vision and presentation mode on auditory spatial bisection performance. Subjects were asked to identify the smaller perceived distance between three consecutive stimuli that were either presented via loudspeakers (free field) or via headphones after convolution with generic head-related impulse responses (binaural reproduction). Thirteen azimuthal sound incidence angles on a circular arc segment of ±24° at a radius of 3 m were included in three regions of space (front, rear, and laterally left). Twenty normally sighted (measured both sighted and blindfolded) and eight blind persons participated. Results showed no significant differences with respect to visual condition, but strong effects of sound direction and presentation mode. Psychometric functions were steepest in frontal space and indicated median spatial bisection thresholds of 11°-14°. Thresholds increased significantly in rear (11°-17°) and laterally left (20°-28°) space in free field. Individual pinna and torso cues, as available only in free field presentation, improved the performance of all participants compared to binaural reproduction. Especially in rear space, auditory spatial bisection thresholds were three to four times higher (i.e., poorer) using binaural reproduction than in free field. The results underline the importance of individual auditory spatial cues for spatial bisection, irrespective of access to vision, which indicates that vision may not be strictly necessary to calibrate allocentric spatial hearing.
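As an illustration of how bisection thresholds like those above can be read off a psychometric function, here is a minimal sketch that fits a logistic function to proportion-correct data and inverts it at a chosen criterion; the function form, criterion, and starting values are assumptions, not the fitting procedure used in the study.

```python
# Illustrative sketch: fit a logistic psychometric function (chance level 0.5)
# to proportion-correct data over angular separations and read the threshold
# off at a chosen criterion. Function form and criterion are assumptions.

import numpy as np
from scipy.optimize import curve_fit

def logistic(x, midpoint, slope):
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (x - midpoint)))

def bisection_threshold(angles_deg, proportion_correct, criterion=0.75):
    (midpoint, slope), _ = curve_fit(logistic, angles_deg, proportion_correct,
                                     p0=[np.median(angles_deg), 0.5])
    # Invert the fitted function at the criterion level.
    return midpoint - np.log(0.5 / (criterion - 0.5) - 1.0) / slope
```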

Citations: 0