Development of a Phrase-Based Speech-Recognition Test Using Synthetic Speech.
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241261490
Saskia Ibelings, Thomas Brand, Esther Ruigendijk, Inga Holube
Speech-recognition tests are widely used in both clinical and research audiology. The purpose of this study was to develop a novel speech-recognition test that combines concepts from different speech-recognition tests to reduce training effects and to allow for a large set of speech material. Each trial of the new test consists of four different words combined into a meaningful construct with a fixed structure, a so-called phrase. Various free databases were used to select the words and to determine their frequency. Highly frequent nouns were grouped into thematic categories and combined with related adjectives and infinitives. After discarding inappropriate and unnatural combinations and eliminating duplications of (sub-)phrases, a total of 772 phrases remained. The phrases were then synthesized using a text-to-speech system, which substantially reduces the effort compared to recording a real speaker. After excluding outliers, speech-recognition scores measured for the phrases with 31 normal-hearing participants at fixed signal-to-noise ratios (SNRs) revealed per-phrase speech-recognition thresholds (SRTs) varying by up to 4 dB. The median SRT was -9.1 dB SNR and thus comparable to existing sentence tests. The slope of the psychometric function, 15 percentage points per dB, is also comparable and enables efficient use in audiology. In summary, the principle of creating speech material in a modular system has many potential applications.
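To make the modular construction concrete, here is a minimal Python sketch of the assembly-and-deduplication principle described above. The word lists, the three-slot phrase structure, and the exclusion set are hypothetical placeholders; the study's actual categories, four-word structure, and frequency-ranked database material are not reproduced here.

```python
from itertools import product

# Hypothetical word lists standing in for the study's thematic categories.
CATEGORIES = {
    "kitchen": {
        "adjectives": ["clean", "heavy"],
        "nouns": ["plates", "spoons"],
        "infinitives": ["to wash", "to count"],
    },
}
# Hand-curated exclusions, mimicking the removal of unnatural combinations.
UNNATURAL = {("heavy", "spoons", "to wash")}

def build_phrases(categories, excluded):
    """Combine words within each thematic category into fixed-structure
    phrases, skipping excluded combinations and duplicates."""
    phrases, seen = [], set()
    for words in categories.values():
        for combo in product(words["adjectives"], words["nouns"],
                             words["infinitives"]):
            if combo in excluded or combo in seen:
                continue
            seen.add(combo)
            phrases.append(" ".join(combo))
    return phrases

print(build_phrases(CATEGORIES, UNNATURAL))
```

The appeal of this design is that the phrase inventory grows multiplicatively with the word lists, so a large, low-redundancy test corpus can be maintained by editing small word tables rather than re-recording material.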
{"title":"Development of a Phrase-Based Speech-Recognition Test Using Synthetic Speech.","authors":"Saskia Ibelings, Thomas Brand, Esther Ruigendijk, Inga Holube","doi":"10.1177/23312165241261490","DOIUrl":"10.1177/23312165241261490","url":null,"abstract":"<p><p>Speech-recognition tests are widely used in both clinical and research audiology. The purpose of this study was the development of a novel speech-recognition test that combines concepts of different speech-recognition tests to reduce training effects and allows for a large set of speech material. The new test consists of four different words per trial in a meaningful construct with a fixed structure, the so-called phrases. Various free databases were used to select the words and to determine their frequency. Highly frequent nouns were grouped into thematic categories and combined with related adjectives and infinitives. After discarding inappropriate and unnatural combinations, and eliminating duplications of (sub-)phrases, a total number of 772 phrases remained. Subsequently, the phrases were synthesized using a text-to-speech system. The synthesis significantly reduces the effort compared to recordings with a real speaker. After excluding outliers, measured speech-recognition scores for the phrases with 31 normal-hearing participants at fixed signal-to-noise ratios (SNR) revealed speech-recognition thresholds (SRT) for each phrase varying up to 4 dB. The median SRT was -9.1 dB SNR and thus comparable to existing sentence tests. The psychometric function's slope of 15 percentage points per dB is also comparable and enables efficient use in audiology. Summarizing, the principle of creating speech material in a modular system has many potential applications.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241261490"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11273571/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141761864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speech-Identification During Standing as a Multitasking Challenge for Young, Middle-Aged and Older Adults.
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241260621
Mira Van Wilderode, Nathan Van Humbeeck, Ralf Krampe, Astrid van Wieringen
While listening, we commonly participate in simultaneous activities. For instance, at receptions people often stand while engaging in conversation. It is known that listening and postural control are associated with each other. Previous studies focused on the interplay of listening and postural control when the speech identification task had rather high cognitive control demands. This study aimed to determine whether listening and postural control interact when the speech identification task requires minimal cognitive control, i.e., when words are presented without background noise or a large memory load. The study included 22 young adults, 27 middle-aged adults, and 21 older adults. Participants performed a speech identification task (auditory single task), a postural control task (posture single task), and combined postural control and speech identification tasks (dual task) to assess the effects of multitasking. The difficulty of the listening and postural control tasks was manipulated by altering the level of the words (25 or 30 dB SPL) and the mobility of the platform (stable or moving). The sound level was increased for adults with a hearing impairment. In the dual task, listening performance decreased, especially for middle-aged and older adults, while postural control improved. These results suggest that interaction with postural control occurs even when the cognitive control demands of listening are minimal. Correlational analysis revealed that hearing loss was a better predictor of speech identification and postural control than age.
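The abstract reports opposite dual-task effects for listening and posture. A standard way to quantify such effects, not spelled out in the abstract, is the proportional dual-task cost; the sketch below uses hypothetical scores and assumes higher values mean better performance on both measures.

```python
def dual_task_cost(single: float, dual: float) -> float:
    """Proportional dual-task cost: positive values mean performance
    dropped under dual-task load; negative values mean it improved."""
    return (single - dual) / single

# Hypothetical scores: word identification (% correct) and postural
# stability (inverse sway, arbitrary units) -- not the study's data.
speech_cost = dual_task_cost(single=90.0, dual=81.0)   # +0.10 -> decline
posture_cost = dual_task_cost(single=0.50, dual=0.55)  # -0.10 -> improvement
print(f"speech cost {speech_cost:+.2f}, posture cost {posture_cost:+.2f}")
```

Costs of opposite sign for the two tasks, as in this toy example, are the signature of the trade-off the study describes: attention appears to be reallocated toward keeping balance at the expense of listening.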
{"title":"Speech-Identification During Standing as a Multitasking Challenge for Young, Middle-Aged and Older Adults.","authors":"Mira Van Wilderode, Nathan Van Humbeeck, Ralf Krampe, Astrid van Wieringen","doi":"10.1177/23312165241260621","DOIUrl":"10.1177/23312165241260621","url":null,"abstract":"<p><p>While listening, we commonly participate in simultaneous activities. For instance, at receptions people often stand while engaging in conversation. It is known that listening and postural control are associated with each other. Previous studies focused on the interplay of listening and postural control when the speech identification task had rather high cognitive control demands. This study aimed to determine whether listening and postural control interact when the speech identification task requires minimal cognitive control, i.e., when words are presented without background noise, or a large memory load. This study included 22 young adults, 27 middle-aged adults, and 21 older adults. Participants performed a speech identification task (auditory single task), a postural control task (posture single task) and combined postural control and speech identification tasks (dual task) to assess the effects of multitasking. The difficulty levels of the listening and postural control tasks were manipulated by altering the level of the words (25 or 30 dB SPL) and the mobility of the platform (stable or moving). The sound level was increased for adults with a hearing impairment. In the dual-task, listening performance decreased, especially for middle-aged and older adults, while postural control improved. These results suggest that even when cognitive control demands for listening are minimal, interaction with postural control occurs. Correlational analysis revealed that hearing loss was a better predictor than age of speech identification and postural control.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241260621"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11282555/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141761866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Perspective on Auditory Wellness: What It Is, Why It Is Important, and How It Can Be Managed.
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241273342
Larry E Humes, Sumitrajit Dhar, Vinaya Manchaiah, Anu Sharma, Theresa H Chisolm, Michelle L Arnold, Victoria A Sanchez
During the last decade, there has been a move towards consumer-centric hearing healthcare. This is a direct result of technological advancements (e.g., the merger of consumer-grade hearing aids with consumer-grade earphones, creating a wide range of hearing devices) as well as policy changes (e.g., the U.S. Food and Drug Administration creating a new over-the-counter [OTC] hearing aid category). In addition to the various direct-to-consumer (DTC) hearing devices available on the market, there are several validated tools for the self-assessment of auditory function and the detection of ear disease, as well as tools for education about hearing loss, hearing devices, and communication strategies. All of these can be made easily available to a wide range of people. This perspective provides a framework and identifies tools to improve and maintain optimal auditory wellness across the adult life course. We discuss a broadly available and accessible set of tools that can be offered on a digital platform to aid adults in assessing and, as needed, improving their auditory wellness.
{"title":"A Perspective on Auditory Wellness: What It Is, Why It Is Important, and How It Can Be Managed.","authors":"Larry E Humes, Sumitrajit Dhar, Vinaya Manchaiah, Anu Sharma, Theresa H Chisolm, Michelle L Arnold, Victoria A Sanchez","doi":"10.1177/23312165241273342","DOIUrl":"10.1177/23312165241273342","url":null,"abstract":"<p><p>During the last decade, there has been a move towards consumer-centric hearing healthcare. This is a direct result of technological advancements (e.g., merger of consumer grade hearing aids with consumer grade earphones creating a wide range of hearing devices) as well as policy changes (e.g., the U.S. Food and Drug Administration creating a new over-the-counter [OTC] hearing aid category). In addition to various direct-to-consumer (DTC) hearing devices available on the market, there are also several validated tools for the self-assessment of auditory function and the detection of ear disease, as well as tools for education about hearing loss, hearing devices, and communication strategies. Further, all can be made easily available to a wide range of people. This <i>perspective</i> provides a framework and identifies tools to improve and maintain optimal auditory wellness across the adult life course. A broadly available and accessible set of tools that can be made available on a digital platform to aid adults in the assessment and as needed, the improvement, of auditory wellness is discussed.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241273342"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11329910/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141989242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development and Evaluation of a Loudness Validation Method With Natural Signals for Hearing Aid Fitting.
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241299778
Mats Exter, Theresa Jansen, Laura Hartog, Dirk Oetting
Loudness is a fundamental dimension of auditory perception. When hearing impairment results in a loudness deficit, hearing aids are typically prescribed to compensate for it. However, the relationship between an individual's specific hearing impairment and the hearing aid fitting strategy used to address it is usually not straightforward: various iterations of fine-tuning and troubleshooting by the hearing care professional are required, based largely on experience and on introspective feedback from the hearing aid user. We present the development of a new method for validating an individual's loudness perception of natural signals relative to a normal-hearing reference. The measurement method is specifically designed for the situation typically encountered by hearing care professionals, namely, hearing-impaired individuals in the free field with their hearing aids in place. Together with qualitative user feedback indicating that the measurement is fast and that its results are displayed intuitively and are easy to interpret, the method fills a gap among existing tools and is well suited to providing concrete guidance and orientation to the hearing care professional during individual gain adjustment.
{"title":"Development and Evaluation of a Loudness Validation Method With Natural Signals for Hearing Aid Fitting.","authors":"Mats Exter, Theresa Jansen, Laura Hartog, Dirk Oetting","doi":"10.1177/23312165241299778","DOIUrl":"10.1177/23312165241299778","url":null,"abstract":"<p><p>Loudness is a fundamental dimension of auditory perception. When hearing impairment results in a loudness deficit, hearing aids are typically prescribed to compensate for this. However, the relationship between an individual's specific hearing impairment and the hearing aid fitting strategy used to address it is usually not straightforward. Various iterations of fine-tuning and troubleshooting by the hearing care professional are required, based largely on experience and the introspective feedback from the hearing aid user. We present the development of a new method for validating an individual's loudness perception of natural signals relative to a normal-hearing reference. It is a measurement method specifically designed for the situation typically encountered by hearing care professionals, namely, with hearing-impaired individuals in the free field with their hearing aids in place. In combination with the qualitative user feedback that the measurement is fast and that its results are intuitively displayed and easily interpretable, the method fills a gap between existing tools and is well suited to provide concrete guidance and orientation to the hearing care professional in the process of individual gain adjustment.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241299778"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11788813/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142781551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of Monaural Temporal Electrode Asynchrony and Channel Interactions in Bilateral and Unilateral Cochlear-Implant Stimulation.
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241271340
Martin J Lindenbeck, Piotr Majdak, Bernhard Laback
Timing cues such as interaural time differences (ITDs) and temporal pitch are pivotal for sound localization and source segregation, but their perception is degraded in cochlear-implant (CI) listeners as compared to normal-hearing listeners. In multi-electrode stimulation, intra-aural channel interactions between electrodes are assumed to be an important factor limiting access to those cues. The monaural asynchrony of stimulation timing across electrodes is assumed to mediate the amount of these interactions. This study investigated the effect of the monaural temporal electrode asynchrony (mTEA) between two electrodes, applied similarly in both ears, on ITD-based left/right discrimination sensitivity in five CI listeners, using pulse trains with 100 pulses per second and per electrode. Forward-masked spatial tuning curves were measured at both ears to find electrode separations evoking controlled degrees of across-electrode masking. For electrode separations smaller than 3 mm, results showed an effect of mTEA. Patterns were u/v-shaped, consistent with an explanation in terms of the effective pulse rate that appears to be subject to the well-known rate limitation in electric hearing. For separations larger than 7 mm, no mTEA effects were observed. A comparison to monaural rate-pitch discrimination in a separate set of listeners and in a matched setup showed no systematic differences between percepts. Overall, an important role of the mTEA in both binaural and monaural dual-electrode stimulation is consistent with a monaural pulse-rate limitation whose effect is mediated by channel interactions. Future CI stimulation strategies aiming at improved timing-cue encoding should minimize the stimulation delay between nearby electrodes that need to be stimulated successively.
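As a rough illustration of the stimulation geometry, the following sketch constructs pulse-onset times for two electrodes per ear, with an mTEA applied identically in both ears and an ITD applied to the whole right-ear pattern, matching the study's 100 pulses-per-second rate. The function name and the specific mTEA and ITD values are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def pulse_times(rate_pps, duration_s, mtea_s, itd_s):
    """Pulse-onset times (s) for two electrodes per ear. The mTEA shifts
    electrode 2 relative to electrode 1 identically in both ears; the ITD
    shifts the entire right-ear pattern relative to the left."""
    base = np.arange(0.0, duration_s, 1.0 / rate_pps)
    left = {"e1": base, "e2": base + mtea_s}
    right = {"e1": base + itd_s, "e2": base + mtea_s + itd_s}
    return left, right

# 100 pulses per second per electrode, as in the study; the 2-ms mTEA and
# 0.5-ms ITD are arbitrary example values.
left, right = pulse_times(rate_pps=100, duration_s=0.3,
                          mtea_s=0.002, itd_s=0.0005)
print(right["e2"][:3] - left["e2"][:3])  # constant ITD across pulses
```

Because the mTEA is identical at both ears, the interaural timing of each electrode pair is preserved; what changes is the monaural pulse pattern across electrodes, which is where the channel-interaction and effective-rate effects described above arise.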
{"title":"Effects of Monaural Temporal Electrode Asynchrony and Channel Interactions in Bilateral and Unilateral Cochlear-Implant Stimulation.","authors":"Martin J Lindenbeck, Piotr Majdak, Bernhard Laback","doi":"10.1177/23312165241271340","DOIUrl":"10.1177/23312165241271340","url":null,"abstract":"<p><p>Timing cues such as interaural time differences (ITDs) and temporal pitch are pivotal for sound localization and source segregation, but their perception is degraded in cochlear-implant (CI) listeners as compared to normal-hearing listeners. In multi-electrode stimulation, intra-aural channel interactions between electrodes are assumed to be an important factor limiting access to those cues. The monaural asynchrony of stimulation timing across electrodes is assumed to mediate the amount of these interactions. This study investigated the effect of the monaural temporal electrode asynchrony (mTEA) between two electrodes, applied similarly in both ears, on ITD-based left/right discrimination sensitivity in five CI listeners, using pulse trains with 100 pulses per second and per electrode. Forward-masked spatial tuning curves were measured at both ears to find electrode separations evoking controlled degrees of across-electrode masking. For electrode separations smaller than 3 mm, results showed an effect of mTEA. Patterns were u/v-shaped, consistent with an explanation in terms of the effective pulse rate that appears to be subject to the well-known rate limitation in electric hearing. For separations larger than 7 mm, no mTEA effects were observed. A comparison to monaural rate-pitch discrimination in a separate set of listeners and in a matched setup showed no systematic differences between percepts. Overall, an important role of the mTEA in both binaural and monaural dual-electrode stimulation is consistent with a monaural pulse-rate limitation whose effect is mediated by channel interactions. Future CI stimulation strategies aiming at improved timing-cue encoding should minimize the stimulation delay between nearby electrodes that need to be stimulated successively.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241271340"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11382250/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142113726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Impact of Trained Conditions on the Generalization of Learning Gains Following Voice Discrimination Training.
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241275895
Yael Zaltz
Auditory training can lead to notable enhancements in specific tasks, but whether these improvements generalize to untrained tasks like speech-in-noise (SIN) recognition remains uncertain. This study examined how training conditions affect generalization. Fifty-five young adults were divided into "Trained-in-Quiet" (n = 15), "Trained-in-Noise" (n = 20), and "Control" (n = 20) groups. Participants completed two sessions. The first session involved an assessment of SIN recognition and voice discrimination (VD) with word or sentence stimuli, employing voice cues that combined fundamental frequency (F0) and formant frequencies. Subsequently, only the trained groups proceeded to an interleaved training phase, encompassing six VD blocks with sentence stimuli and using either F0-only or formant-only cues. The second session replicated the interleaved training for the trained groups, followed by a second assessment, identical to the first, completed by all three groups. Results showed significant improvements in the trained task regardless of training conditions. However, VD training with a single cue did not enhance VD with both cues beyond the control group's improvements, suggesting limited generalization. Notably, the Trained-in-Noise group exhibited the largest SIN recognition improvements posttraining, implying generalization across tasks that share similar acoustic conditions. Overall, the findings suggest that training conditions impact generalization by influencing the processing levels associated with the trained task. Training in noisy conditions may prompt higher auditory and/or cognitive processing than training in quiet, potentially extending skills to tasks involving challenging listening conditions, such as SIN recognition. These insights hold significant theoretical and clinical implications, potentially advancing the development of effective auditory training protocols.
{"title":"The Impact of Trained Conditions on the Generalization of Learning Gains Following Voice Discrimination Training.","authors":"Yael Zaltz","doi":"10.1177/23312165241275895","DOIUrl":"10.1177/23312165241275895","url":null,"abstract":"<p><p>Auditory training can lead to notable enhancements in specific tasks, but whether these improvements generalize to untrained tasks like speech-in-noise (SIN) recognition remains uncertain. This study examined how training conditions affect generalization. Fifty-five young adults were divided into \"Trained-in-Quiet\" (<i>n</i> = 15), \"Trained-in-Noise\" (<i>n</i> = 20), and \"Control\" (<i>n</i> = 20) groups. Participants completed two sessions. The first session involved an assessment of SIN recognition and voice discrimination (VD) with word or sentence stimuli, employing combined fundamental frequency (F0) + formant frequencies voice cues. Subsequently, only the trained groups proceeded to an interleaved training phase, encompassing six VD blocks with sentence stimuli, utilizing either F0-only or formant-only cues. The second session replicated the interleaved training for the trained groups, followed by a second assessment conducted by all three groups, identical to the first session. Results showed significant improvements in the trained task regardless of training conditions. However, VD training with a single cue did not enhance VD with both cues beyond control group improvements, suggesting limited generalization. Notably, the Trained-in-Noise group exhibited the most significant SIN recognition improvements posttraining, implying generalization across tasks that share similar acoustic conditions. Overall, findings suggest training conditions impact generalization by influencing processing levels associated with the trained task. Training in noisy conditions may prompt higher auditory and/or cognitive processing than training in quiet, potentially extending skills to tasks involving challenging listening conditions, such as SIN recognition. These insights hold significant theoretical and clinical implications, potentially advancing the development of effective auditory training protocols.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241275895"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11367600/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142113727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing the Auditory Distance and Externalization of Virtual Sound Sources Simulated Using Nonindividualized Stimuli.
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241285695
Mathieu Lavandier, Lizette Heine, Fabien Perrin
When reproducing sounds over headphones, the simulated source can be externalized (i.e., perceived outside the head) or internalized (i.e., perceived within the head). Is this because it is perceived as more or less distant? To investigate this question, 18 participants evaluated distance and externalization for three types of sound (speech, piano, helicopter) in 27 conditions using nonindividualized stimuli. Distance and externalization ratings were significantly correlated across conditions and listeners, and when averaged across listeners or conditions. However, they were also decoupled in some circumstances: (1) sound type had different effects on distance and externalization: the helicopter was evaluated as more distant, while speech was judged as less externalized; (2) distance estimations increased with simulated distance even for stimuli judged as internalized; (3) diotic reverberation influenced distance but not externalization. Overall, a source was not rated as externalized as soon as, and only if, its perceived distance exceeded a threshold (e.g., the head radius). These results suggest that distance and externalization are correlated but might not be aspects of a single perceptual continuum. In particular, a virtual source might be judged as internalized yet still assigned a distance. Hence, it could be important to avoid using a scale related to distance when evaluating externalization.
{"title":"Comparing the Auditory Distance and Externalization of Virtual Sound Sources Simulated Using Nonindividualized Stimuli.","authors":"Mathieu Lavandier, Lizette Heine, Fabien Perrin","doi":"10.1177/23312165241285695","DOIUrl":"10.1177/23312165241285695","url":null,"abstract":"<p><p>When reproducing sounds over headphones, the simulated source can be externalized (i.e., perceived outside the head) or internalized (i.e., perceived within the head). Is it because it is perceived as more or less distant? To investigate this question, 18 participants evaluated distance and externalization for three types of sound (speech, piano, helicopter) in 27 conditions using nonindividualized stimuli. Distance and externalization ratings were significantly correlated across conditions and listeners, and when averaged across listeners or conditions. However, they were also decoupled in some circumstances: (1) Sound type had different effects on distance and externalization: the helicopter was evaluated as more distant, while speech was judged as less externalized. (2) Distance estimations increased with simulated distances even for stimuli judged as internalized. (3) Diotic reverberation influenced distance but not externalization. Overall, a source was not rated as externalized as soon as and only if its perceived distance exceeded a threshold (e.g., the head radius). These results suggest that distance and externalization are correlated but might not be aspects of a single perceptual continuum. In particular, a virtual source might be judged as both internalized and with a distance. Hence, it could be important to avoid using a scale related to distance when evaluating externalization.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241285695"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11500226/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142478117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In-situ Audiometry Compared to Conventional Audiometry for Hearing Aid Fitting.
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241259704
Maaike Van Eeckhoutte, Bettina Skjold Jasper, Erik Finn Kjærbøl, David Harbo Jordell, Torsten Dau
The use of in-situ audiometry for hearing aid fitting is appealing due to its reduced resource and equipment requirements compared to standard approaches employing conventional audiometry alongside real-ear measures. However, its validity has been a subject of debate, as previous studies noted differences between hearing thresholds measured using conventional and in-situ audiometry. The differences were particularly notable for open-fit hearing aids, attributed to low-frequency leakage caused by the vent. Here, in-situ audiometry was investigated for six receiver-in-canal hearing aids from different manufacturers through three experiments. In Experiment I, the hearing aid gain was measured to investigate whether corrections were applied to the prescribed target gain. In Experiment II, the in-situ stimuli were recorded to investigate whether corrections were incorporated directly into the delivered in-situ stimulus. Finally, in Experiment III, hearing thresholds were measured using in-situ and conventional audiometry with real patients wearing open-fit hearing aids. Results indicated that (1) the hearing aid gain remained unaffected whether measured with in-situ or conventional audiometry for all open-fit measurements, (2) the in-situ stimuli were adjusted by up to 30 dB at frequencies below 1000 Hz for all open-fit hearing aids except one, whose manufacturer also recommends closed domes for all in-situ measurements, and (3) the mean interparticipant threshold difference fell within 5 dB for frequencies between 250 and 6000 Hz. The results indicate that modern in-situ thresholds align (within 5 dB) with conventionally measured thresholds, demonstrating the potential of in-situ audiometry for remote hearing care.
{"title":"In-situ Audiometry Compared to Conventional Audiometry for Hearing Aid Fitting.","authors":"Maaike Van Eeckhoutte, Bettina Skjold Jasper, Erik Finn Kjærbøl, David Harbo Jordell, Torsten Dau","doi":"10.1177/23312165241259704","DOIUrl":"10.1177/23312165241259704","url":null,"abstract":"<p><p>The use of in-situ audiometry for hearing aid fitting is appealing due to its reduced resource and equipment requirements compared to standard approaches employing conventional audiometry alongside real-ear measures. However, its validity has been a subject of debate, as previous studies noted differences between hearing thresholds measured using conventional and in-situ audiometry. The differences were particularly notable for open-fit hearing aids, attributed to low-frequency leakage caused by the vent. Here, in-situ audiometry was investigated for six receiver-in-canal hearing aids from different manufacturers through three experiments. In Experiment I, the hearing aid gain was measured to investigate whether corrections were implemented to the prescribed target gain. In Experiment II, the in-situ stimuli were recorded to investigate if corrections were directly incorporated to the delivered in-situ stimulus. Finally, in Experiment III, hearing thresholds using in-situ and conventional audiometry were measured with real patients wearing open-fit hearing aids. Results indicated that (1) the hearing aid gain remained unaffected when measured with in-situ or conventional audiometry for all open-fit measurements, (2) the in-situ stimuli were adjusted for up to 30 dB at frequencies below 1000 Hz for all open-fit hearing aids except one, which also recommends the use of closed domes for all in-situ measurements, and (3) the mean interparticipant threshold difference fell within 5 dB for frequencies between 250 and 6000 Hz. The results clearly indicated that modern measured in-situ thresholds align (within 5 dB) with conventional thresholds measured, indicating the potential of in-situ audiometry for remote hearing care.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241259704"},"PeriodicalIF":2.7,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11155351/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141248830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptation to Reverberation for Speech Perception: A Systematic Review.
Avgeris Tsironis, Eleni Vlahou, Panagiota Kontou, Pantelis Bagos, Norbert Kopčo
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241273399
In everyday acoustic environments, reverberation alters the speech signal received at the ears. Normal-hearing listeners are robust to these distortions, quickly recalibrating to achieve accurate speech perception. Over the past two decades, multiple studies have investigated the various adaptation mechanisms that listeners use to mitigate the negative impacts of reverberation and improve speech intelligibility. Following the PRISMA guidelines, we performed a systematic review of these studies, with the aim to summarize existing research, identify open questions, and propose future directions. Two researchers independently assessed a total of 661 studies, ultimately including 23 in the review. Our results showed that adaptation to reverberant speech is robust across diverse environments, experimental setups, speech units, and tasks, in noise-masked or unmasked conditions. The time course of adaptation is rapid, sometimes occurring in less than 1 s, but this can vary depending on the reverberation and noise levels of the acoustic environment. Adaptation is stronger in moderately reverberant rooms and minimal in rooms with very intense reverberation. While the mechanisms underlying the recalibration are largely unknown, adaptation to the direct-to-reverberant ratio-related changes in amplitude modulation appears to be the predominant candidate. However, additional factors need to be explored to provide a unified theory for the effect and its applications.
The Relationship of Pitch Discrimination with Segregation of Tonal and Speech Streams for Cochlear Implant Users.
Pub Date: 2024-01-01 | DOI: 10.1177/23312165241305049
Andres Camarena, Matthew Ardis, Takako Fujioka, Matthew B Fitzgerald, Raymond L Goldsworthy
Cochlear implant (CI) users often report difficulties with music appreciation and with speech recognition in background noise, both of which depend on segregating sound sources into perceptual streams. The present study examined relationships between frequency and fundamental frequency (F0) discrimination and stream segregation of tonal and speech streams for CI users and peers with no known hearing loss. Frequency and F0 discrimination were measured for 1,000 Hz pure tones and 110 Hz complex tones, respectively. Stream segregation was measured for pure and complex tones using a lead/lag delay detection task. Spondee word identification was measured in competing speech with high levels of informational masking, which required listeners to use F0 to segregate speech. The hypotheses were that frequency and F0 discrimination would explain a significant portion of the variance in outcomes for tonal segregation and speech reception. On average, CI users received a large benefit for stream segregation of tonal streams when either the frequency or F0 of the competing stream was shifted relative to the target stream. A linear relationship accounted for 42% of the covariance between measures of stream segregation and complex-tone discrimination for CI users. In contrast, such benefits were absent when the F0 of the competing speech was shifted relative to the target speech. The large benefit observed for tonal streams is promising for music listening if it transfers to separating instruments within a song; however, the lack of benefit for speech suggests separate mechanisms, or special requirements, for speech processing.
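The reported figure, a linear relationship accounting for 42% of the covariance between discrimination and segregation measures, corresponds to the coefficient of determination (R²) of a least-squares fit. A minimal sketch with synthetic stand-in data shows how such a value is computed; none of the numbers below are the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-listener complex-tone (F0) discrimination
# thresholds and stream-segregation benefit (hypothetical units).
f0_thresholds = rng.uniform(0.5, 8.0, size=20)
segregation = 10.0 - 0.9 * f0_thresholds + rng.normal(0.0, 1.5, size=20)

# Least-squares linear fit and the share of variance it explains (R^2).
slope, intercept = np.polyfit(f0_thresholds, segregation, 1)
predicted = slope * f0_thresholds + intercept
ss_res = np.sum((segregation - predicted) ** 2)
ss_tot = np.sum((segregation - segregation.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")  # the study reports 42% for CI users
```

An R² of this size implies a correlation of roughly 0.65 between pitch discrimination and tonal stream segregation, consistent with the paper's conclusion that the two abilities are linked for tonal, but not speech, streams.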
{"title":"The Relationship of Pitch Discrimination with Segregation of Tonal and Speech Streams for Cochlear Implant Users.","authors":"Andres Camarena, Matthew Ardis, Takako Fujioka, Matthew B Fitzgerald, Raymond L Goldsworthy","doi":"10.1177/23312165241305049","DOIUrl":"10.1177/23312165241305049","url":null,"abstract":"<p><p>Cochlear implant (CI) users often complain about music appreciation and speech recognition in background noise, which depend on segregating sound sources into perceptual streams. The present study examined relationships between frequency and fundamental frequency (F0) discrimination with stream segregation of tonal and speech streams for CI users and peers with no known hearing loss. Frequency and F0 discrimination were measured for 1,000 Hz pure tones and 110 Hz complex tones, respectively. Stream segregation was measured for pure and complex tones using a lead/lag delay detection task. Spondee word identification was measured in competing speech with high levels of informational masking that required listeners to use F0 to segregate speech. The hypotheses were that frequency and F0 discrimination would explain a significant portion of the variance in outcomes for tonal segregation and speech reception. On average, CI users received a large benefit for stream segregation of tonal streams when either the frequency or F0 of the competing stream was shifted relative to the target stream. A linear relationship accounted for 42% of the covariance between measures of stream segregation and complex tone discrimination for CI users. In contrast, such benefits were absent when the F0 of the competing speech was shifted relative to the target speech. The large benefit observed for tonal streams is promising for music listening if it transfers to separating instruments within a song; however, the lack of benefit for speech suggests separate mechanisms, or special requirements, for speech processing.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"28 ","pages":"23312165241305049"},"PeriodicalIF":2.6,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11639003/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}