Pub Date: 2024-01-01 | DOI: 10.1177/23312165241275895
Yael Zaltz
Auditory training can lead to notable enhancements in specific tasks, but whether these improvements generalize to untrained tasks like speech-in-noise (SIN) recognition remains uncertain. This study examined how training conditions affect generalization. Fifty-five young adults were divided into "Trained-in-Quiet" (n = 15), "Trained-in-Noise" (n = 20), and "Control" (n = 20) groups. Participants completed two sessions. The first session involved an assessment of SIN recognition and voice discrimination (VD) with word or sentence stimuli, employing combined fundamental frequency (F0) and formant-frequency voice cues. Subsequently, only the trained groups proceeded to an interleaved training phase, encompassing six VD blocks with sentence stimuli, using either F0-only or formant-only cues. The second session replicated the interleaved training for the trained groups, followed by a second assessment, identical to the first, completed by all three groups. Results showed significant improvements in the trained task regardless of training conditions. However, VD training with a single cue did not enhance VD with both cues beyond control-group improvements, suggesting limited generalization. Notably, the Trained-in-Noise group exhibited the largest SIN recognition improvements posttraining, implying generalization across tasks that share similar acoustic conditions. Overall, the findings suggest that training conditions affect generalization by influencing the processing levels associated with the trained task. Training in noisy conditions may prompt higher-level auditory and/or cognitive processing than training in quiet, potentially extending skills to tasks involving challenging listening conditions, such as SIN recognition. These insights hold significant theoretical and clinical implications, potentially advancing the development of effective auditory training protocols.
"The Impact of Trained Conditions on the Generalization of Learning Gains Following Voice Discrimination Training." Trends in Hearing, vol. 28, 2024, 23312165241275895. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11367600/pdf/
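For illustration, voice-cue differences like the F0 shifts used in VD tasks are conventionally expressed on a logarithmic (semitone) scale. A minimal sketch of that conversion (the 110 Hz reference and the step sizes are hypothetical, not values from the study):

```python
import math

def f0_ratio_to_semitones(f0_ref: float, f0_cmp: float) -> float:
    """Express the difference between two fundamental frequencies in semitones."""
    return 12 * math.log2(f0_cmp / f0_ref)

def shift_f0(f0_ref: float, semitones: float) -> float:
    """Shift a reference F0 by a number of semitones."""
    return f0_ref * 2 ** (semitones / 12)

# An octave (2:1 frequency ratio) spans 12 semitones.
print(f0_ratio_to_semitones(110.0, 220.0))   # → 12.0
# Shifting a 110 Hz voice up by one semitone gives roughly 116.5 Hz.
print(round(shift_f0(110.0, 1.0), 1))        # → 116.5
```

Adaptive discrimination procedures typically track the smallest such shift a listener can detect, which is why a log scale is the natural unit for these cues.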
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241259704
Maaike Van Eeckhoutte, Bettina Skjold Jasper, Erik Finn Kjærbøl, David Harbo Jordell, Torsten Dau
The use of in-situ audiometry for hearing aid fitting is appealing due to its reduced resource and equipment requirements compared to standard approaches employing conventional audiometry alongside real-ear measures. However, its validity has been a subject of debate, as previous studies noted differences between hearing thresholds measured using conventional and in-situ audiometry. The differences were particularly notable for open-fit hearing aids, attributed to low-frequency leakage through the vent. Here, in-situ audiometry was investigated for six receiver-in-canal hearing aids from different manufacturers through three experiments. In Experiment I, the hearing aid gain was measured to investigate whether corrections were applied to the prescribed target gain. In Experiment II, the in-situ stimuli were recorded to investigate whether corrections were incorporated directly into the delivered in-situ stimulus. Finally, in Experiment III, hearing thresholds were measured using in-situ and conventional audiometry with real patients wearing open-fit hearing aids. Results indicated that (1) the hearing aid gain remained unaffected whether measured with in-situ or conventional audiometry for all open-fit measurements, (2) the in-situ stimuli were adjusted by up to 30 dB at frequencies below 1000 Hz for all open-fit hearing aids except one, whose manufacturer also recommends closed domes for all in-situ measurements, and (3) the mean interparticipant threshold difference fell within 5 dB for frequencies between 250 and 6000 Hz. The results indicate that in-situ thresholds measured with modern devices align (within 5 dB) with conventionally measured thresholds, supporting the potential of in-situ audiometry for remote hearing care.
"In-situ Audiometry Compared to Conventional Audiometry for Hearing Aid Fitting." Trends in Hearing, vol. 28, 2024, 23312165241259704. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11155351/pdf/
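The 5 dB agreement criterion above boils down to averaging per-frequency threshold differences between the two methods. A minimal sketch with hypothetical thresholds (the dB HL values below are illustrative, not data from the study):

```python
# Hypothetical thresholds (dB HL) at the audiometric frequencies tested.
freqs = [250, 500, 1000, 2000, 4000, 6000]
conventional = {250: 20, 500: 25, 1000: 30, 2000: 35, 4000: 45, 6000: 50}
in_situ = {250: 24, 500: 27, 1000: 31, 2000: 34, 4000: 47, 6000: 52}

def mean_threshold_difference(a, b, freqs):
    """Mean signed difference (dB) between two sets of thresholds."""
    return sum(a[f] - b[f] for f in freqs) / len(freqs)

diff = mean_threshold_difference(in_situ, conventional, freqs)
print(round(diff, 2))  # → 1.67, i.e., well within the 5 dB criterion
```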
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241273399
Avgeris Tsironis, Eleni Vlahou, Panagiota Kontou, Pantelis Bagos, Norbert Kopčo
In everyday acoustic environments, reverberation alters the speech signal received at the ears. Normal-hearing listeners are robust to these distortions, quickly recalibrating to achieve accurate speech perception. Over the past two decades, multiple studies have investigated the various adaptation mechanisms that listeners use to mitigate the negative impacts of reverberation and improve speech intelligibility. Following the PRISMA guidelines, we performed a systematic review of these studies, with the aim of summarizing existing research, identifying open questions, and proposing future directions. Two researchers independently assessed a total of 661 studies, ultimately including 23 in the review. Our results showed that adaptation to reverberant speech is robust across diverse environments, experimental setups, speech units, and tasks, in noise-masked or unmasked conditions. The time course of adaptation is rapid, sometimes occurring in less than 1 s, but this can vary depending on the reverberation and noise levels of the acoustic environment. Adaptation is stronger in moderately reverberant rooms and minimal in rooms with very intense reverberation. While the mechanisms underlying the recalibration are largely unknown, adaptation to the direct-to-reverberant ratio-related changes in amplitude modulation appears to be the predominant candidate. However, additional factors need to be explored to provide a unified theory for the effect and its applications.
Avgeris Tsironis, Eleni Vlahou, Panagiota Kontou, Pantelis Bagos, Norbert Kopčo. "Adaptation to Reverberation for Speech Perception: A Systematic Review." Trends in Hearing, vol. 28, 2024, 23312165241273399. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11384524/pdf/
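The direct-to-reverberant ratio (DRR) mentioned above is commonly computed from a room impulse response by splitting its energy at a short window after the direct-sound peak. A sketch under that assumption (the 2.5 ms window and the toy impulse response are illustrative choices, not values from the review):

```python
import math

def direct_to_reverberant_ratio(ir, fs, direct_ms=2.5):
    """Direct-to-reverberant ratio (dB) of a room impulse response:
    energy up to `direct_ms` after the direct-sound peak counts as direct,
    the remainder as reverberant."""
    peak = max(range(len(ir)), key=lambda i: abs(ir[i]))
    split = peak + int(fs * direct_ms / 1000) + 1
    direct = sum(x * x for x in ir[:split])
    reverb = sum(x * x for x in ir[split:])
    return 10 * math.log10(direct / reverb)

# Toy impulse response: a unit direct spike followed by a flat low-level tail.
fs = 16000
ir = [1.0] + [0.01] * 8000
print(round(direct_to_reverberant_ratio(ir, fs), 1))  # → 1.0
```

A stronger direct spike (or a weaker tail) raises the DRR; moderately reverberant rooms, where adaptation was strongest, sit between the high-DRR and very-low-DRR extremes.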
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241285695
Mathieu Lavandier, Lizette Heine, Fabien Perrin
When reproducing sounds over headphones, the simulated source can be externalized (i.e., perceived outside the head) or internalized (i.e., perceived within the head). Is this because it is perceived as more or less distant? To investigate this question, 18 participants evaluated distance and externalization for three types of sound (speech, piano, helicopter) in 27 conditions using nonindividualized stimuli. Distance and externalization ratings were significantly correlated across conditions and listeners, and when averaged across listeners or conditions. However, they were also decoupled in some circumstances: (1) Sound type had different effects on distance and externalization: the helicopter was evaluated as more distant, while speech was judged as less externalized. (2) Distance estimations increased with simulated distances even for stimuli judged as internalized. (3) Diotic reverberation influenced distance but not externalization. Overall, a source was not rated as externalized simply whenever (and only when) its perceived distance exceeded a threshold (e.g., the head radius). These results suggest that distance and externalization are correlated but might not be aspects of a single perceptual continuum. In particular, a virtual source might be judged as both internalized and at some distance. Hence, it could be important to avoid using a scale related to distance when evaluating externalization.
"Comparing the Auditory Distance and Externalization of Virtual Sound Sources Simulated Using Nonindividualized Stimuli." Trends in Hearing, vol. 28, 2024, 23312165241285695. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11500226/pdf/
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241293762
Robin Hake, Gunter Kreutz, Ulrike Frischen, Merle Schlender, Esther Rois-Merz, Markus Meis, Kirsten C Wagener, Kai Siedenburg
Hearing health, a cornerstone for musical performance and appreciation, often stands at odds with the unique acoustical challenges that musicians face. Utilizing a cross-sectional design, this survey-based study presents an in-depth examination of self-rated hearing health and its contributing factors in 370 professional and 401 amateur musicians recruited from German-speaking orchestras. To probe the nuanced differences between these groups, a balanced subsample of 200 professionals and 200 amateurs was curated, matched based on age, gender, and instrument family. The findings revealed that two-thirds of respondents reported hearing-related issues, prevalent in both professional and amateur musicians and affecting music-related activities as well as social interactions. The comparative analysis indicates that professionals experienced nearly four times more lifetime music noise exposure compared to amateurs and faced more hearing challenges in social contexts, but not in musical settings. Professionals exhibited greater awareness about hearing health and were more proactive in using hearing protection devices compared to their amateur counterparts. Notably, only 9% of professional musicians' playing hours and a mere 1% of amateurs' playing hours were fully protected. However, with respect to their attitudes toward hearing aids, professional musicians exhibited a noticeable aversion. In general, an increase in music-related problems (alongside hearing difficulties in daily life) was associated with a decrease in mental health-related quality of life. This research highlights the importance of proactive hearing health measures among both professional and amateur musicians and underscores the need for targeted interventions that address musicians' specific hearing health challenges and stigmatization concerns about hearing aids.
"A Survey on Hearing Health of Musicians in Professional and Amateur Orchestras." Trends in Hearing, vol. 28, 2024, 23312165241293762.
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241305049
Andres Camarena, Matthew Ardis, Takako Fujioka, Matthew B Fitzgerald, Raymond L Goldsworthy
Cochlear implant (CI) users often complain about music appreciation and speech recognition in background noise, which depend on segregating sound sources into perceptual streams. The present study examined relationships between frequency and fundamental frequency (F0) discrimination with stream segregation of tonal and speech streams for CI users and peers with no known hearing loss. Frequency and F0 discrimination were measured for 1,000 Hz pure tones and 110 Hz complex tones, respectively. Stream segregation was measured for pure and complex tones using a lead/lag delay detection task. Spondee word identification was measured in competing speech with high levels of informational masking that required listeners to use F0 to segregate speech. The hypotheses were that frequency and F0 discrimination would explain a significant portion of the variance in outcomes for tonal segregation and speech reception. On average, CI users received a large benefit for stream segregation of tonal streams when either the frequency or F0 of the competing stream was shifted relative to the target stream. A linear relationship accounted for 42% of the covariance between measures of stream segregation and complex tone discrimination for CI users. In contrast, such benefits were absent when the F0 of the competing speech was shifted relative to the target speech. The large benefit observed for tonal streams is promising for music listening if it transfers to separating instruments within a song; however, the lack of benefit for speech suggests separate mechanisms, or special requirements, for speech processing.
"The Relationship of Pitch Discrimination with Segregation of Tonal and Speech Streams for Cochlear Implant Users." Trends in Hearing, vol. 28, 2024, 23312165241305049. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11639003/pdf/
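The stimulus classes described above, a 1,000 Hz pure tone and a 110 Hz harmonic complex, can be sketched as sampled waveforms (the duration, sample rate, and harmonic count are illustrative choices, not parameters from the study):

```python
import math

def pure_tone(freq, fs, dur):
    """Sampled sinusoid at `freq` Hz."""
    return [math.sin(2 * math.pi * freq * t / fs) for t in range(int(fs * dur))]

def complex_tone(f0, fs, dur, n_harmonics=8):
    """Harmonic complex: equal-amplitude partials at integer multiples of f0."""
    n = int(fs * dur)
    return [sum(math.sin(2 * math.pi * f0 * h * t / fs)
                for h in range(1, n_harmonics + 1)) / n_harmonics
            for t in range(n)]

fs = 16000
tone = pure_tone(1000, fs, 0.05)    # 1,000 Hz pure tone, 50 ms
harm = complex_tone(110, fs, 0.05)  # 110 Hz complex tone, 50 ms
print(len(tone), len(harm))  # → 800 800
```

Frequency discrimination manipulates the single partial of the pure tone, while F0 discrimination shifts all harmonics of the complex together, which is why the two tasks probe different pitch cues.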
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241227818
Jiayue Liu, Joshua Stohl, Enrique A Lopez-Poveda, Tobias Overath
The past decade has seen a wealth of research dedicated to determining which morphological changes in the auditory periphery contribute, and how, to people experiencing hearing difficulties in noise despite having clinically normal audiometric thresholds in quiet. Evidence from animal studies suggests that cochlear synaptopathy in the inner ear might lead to auditory nerve deafferentation, resulting in impoverished signal transmission to the brain. Here, we quantify the likely perceptual consequences of auditory deafferentation in humans via a physiologically inspired encoding-decoding model. The encoding stage simulates the processing of an acoustic input stimulus (e.g., speech) at the auditory periphery, while the decoding stage is trained to optimally regenerate the input stimulus from the simulated auditory nerve firing data. This allowed us to quantify the effect of different degrees of auditory deafferentation by measuring the extent to which the decoded signal supported the identification of speech in quiet and in noise. In a series of experiments, speech perception thresholds in quiet and in noise increased (worsened) significantly as a function of the degree of auditory deafferentation for modeled deafferentation greater than 90%. Importantly, this effect was significantly stronger in a noisy than in a quiet background. The encoding-decoding model thus captured the hallmark symptom of degraded speech perception in noise together with normal speech perception in quiet. As such, the model might function as a quantitative guide to evaluating the degree of auditory deafferentation in human listeners.
"Quantifying the Impact of Auditory Deafferentation on Speech Perception." Trends in Hearing, vol. 28, 2024, 23312165241227818. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10832414/pdf/
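One intuition for why only extreme (>90%) deafferentation degraded performance: if each surviving fiber carries the signal plus independent noise, averaging across fibers divides the noise power by the number of fibers, so the SNR in dB falls only logarithmically until very few fibers remain. A toy calculation under that simplifying assumption (the fiber count and per-fiber SNR are hypothetical; this is not the paper's encoding-decoding model):

```python
import math

def decoded_snr_db(n_fibers, deaff_fraction, per_fiber_snr=0.1):
    """SNR (dB) after averaging over surviving fibers, assuming each fiber
    carries the signal plus independent noise: averaging k fibers divides
    the noise power by k, so SNR grows linearly with k."""
    k = max(1, round(n_fibers * (1 - deaff_fraction)))
    return 10 * math.log10(k * per_fiber_snr)

# SNR drops only 10 dB between 0% and 90% fiber loss, then falls quickly.
for deaff in (0.0, 0.5, 0.9, 0.99):
    print(f"{deaff:.0%} deafferentation: {decoded_snr_db(10000, deaff):.1f} dB SNR")
```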
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241248973
Sabine Haumann, Max E Timm, Andreas Büchner, Thomas Lenarz, Rolf B Salcher
To preserve residual hearing during cochlear implant (CI) surgery, it is desirable to monitor inner ear function intraoperatively (cochlear monitoring). A promising method is electrocochleography (ECochG). This project investigated the relations between intracochlear ECochG recordings, the position of the recording contact in the cochlea (with respect to anatomy and frequency), and the preservation of residual hearing. The aim was to better understand the changes in ECochG signals and whether these are due to the electrode position in the cochlea or to trauma generated during insertion. During and after insertion of hearing-preservation electrodes, intraoperative ECochG recordings were performed using the CI electrode (MED-EL). During insertion, the recordings were performed at discrete insertion steps on electrode contact 1; after insertion as well as postoperatively, the recordings were performed at different electrode contacts. The electrode location in the cochlea during insertion was estimated by mathematical models using preoperative clinical imaging; the postoperative location was measured using postoperative clinical imaging. The recordings of six adult CI recipients were analyzed. In the four patients with good low-frequency residual hearing, the signal amplitude rose, with the largest amplitudes recorded closest to the generators of the stimulation frequency, while in the two cases with severe pantonal hearing loss the amplitude initially rose and then dropped. This might be due to various reasons, as discussed in the following. Our results indicate that this approach can provide valuable information for the interpretation of intracochlearly recorded ECochG signals.
Title: "Intracochlear Recording of Electrocochleography During and After Cochlear Implant Insertion Dependent on the Location in the Cochlea." Trends in Hearing, vol. 28, 2024. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11080744/pdf/
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241246596
Florine L Bachmann, Joshua P Kulasingham, Kasper Eskelund, Martin Enqvist, Emina Alickovic, Hamish Innes-Brown
The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have been recently detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphones and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
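TRF estimation of this kind is, in essence, a regularized deconvolution: the EEG is modeled as the convolution of a (possibly non-linearly transformed) stimulus representation with an unknown kernel, the TRF. Below is a minimal sketch using time-lagged ridge regression; the function name, the lag construction, and the regularization constant are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_trf(stimulus, eeg, n_lags, ridge=1e-3):
    """Estimate a linear temporal response function (TRF) by ridge regression.

    stimulus : 1-D predictor (e.g., rectified speech or an auditory-nerve
               model output)
    eeg      : 1-D recorded response, same length as stimulus
    n_lags   : number of sample lags the TRF spans
    """
    n = len(stimulus)
    # Lagged design matrix: column k holds the stimulus delayed by k samples,
    # so eeg ≈ X @ trf is a discrete convolution.
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[: n - lag]
    # Ridge-regularized normal equations: trf = (X'X + aI)^-1 X'y
    XtX = X.T @ X + ridge * np.eye(n_lags)
    return np.linalg.solve(XtX, X.T @ eeg)
```

In practice, studies like the one above replace the raw stimulus with a rectified waveform or an auditory-nerve model output before fitting, precisely the peripheral non-linearity the abstract mentions.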
Title: "Extending Subcortical EEG Responses to Continuous Speech to the Sound-Field." Trends in Hearing, vol. 28, 2024. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11092544/pdf/
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241235463
Melissa Ramírez, Johannes M Arend, Petra von Gablenz, Heinrich R Liesefeld, Christoph Pörschmann
Sound localization testing is key for comprehensive hearing evaluations, particularly in cases of suspected auditory processing disorders. However, sound localization is not commonly assessed in clinical practice, likely due to the complexity and size of conventional measurement systems, which require semicircular loudspeaker arrays in large and acoustically treated rooms. To address this issue, we investigated the feasibility of testing sound localization in virtual reality (VR). Previous research has shown that virtualization can lead to an increase in localization blur. To measure these effects, we conducted a study with a group of normal-hearing adults, comparing sound localization performance in different augmented reality and VR scenarios. We started with a conventional loudspeaker-based measurement setup and gradually moved to a virtual audiovisual environment, testing sound localization in each scenario using a within-participant design. The loudspeaker-based experiment yielded results comparable to those reported in the literature, and the results of the virtual localization test provided new insights into localization performance in state-of-the-art VR environments. By comparing localization performance between the loudspeaker-based and virtual conditions, we were able to estimate the increase in localization blur induced by virtualization relative to a conventional test setup. Notably, our study provides the first proxy normative cutoff values for sound localization testing in VR. As an outlook, we discuss the potential of a VR-based sound localization test as a suitable, accessible, and portable alternative to conventional setups and how it could serve as a time- and resource-saving prescreening tool to avoid unnecessarily extensive and complex laboratory testing.
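Localization blur is typically quantified from the spread of angular errors between target and response directions, with differences wrapped onto the circle so that, e.g., 170° and -170° are treated as 20° apart. A minimal sketch of such a metric follows; the function name and the choice of RMS error (rather than, say, mean absolute error) are assumptions, not the study's exact definition.

```python
import numpy as np

def localization_blur(target_az, response_az):
    """Root-mean-square angular error in degrees between target and
    response azimuths, wrapping each difference into [-180, 180)."""
    d = (np.asarray(response_az, dtype=float)
         - np.asarray(target_az, dtype=float) + 180.0) % 360.0 - 180.0
    return float(np.sqrt(np.mean(d ** 2)))
```

Comparing this metric between the loudspeaker-based and virtual conditions, per participant, is one way to express the virtualization-induced increase in blur the abstract reports.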
Title: "Toward Sound Localization Testing in Virtual Reality to Aid in the Screening of Auditory Processing Disorders." Trends in Hearing, vol. 28, 2024. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10908240/pdf/