Pub Date: 2023-01-01 | DOI: 10.1177/23312165231191382
Christoph Schmid, Wilhelm Wimmer, Martin Kompis
Matrix sentence tests in noise can be challenging for the listener and time-consuming, so a trade-off must be found between testing time, listener comfort, and the precision of the results. Here, a novel test procedure based on an updated maximum-likelihood method was developed and implemented in a German matrix sentence test. It estimates the parameters of the psychometric function (threshold, slope, and lapse rate) without constantly challenging the listener at the intelligibility threshold. A so-called "credible interval" serves as a mid-run estimate of reliability and can be used as a termination criterion for the test. The procedure was evaluated and compared to a STAIRCASE procedure in a study with 20 cochlear implant patients and 20 normal-hearing participants. The proposed procedure offers accuracy and reliability comparable to the reference method, but with lower listening effort as rated by the listeners (−1.8 points on a 10-point scale). Test duration can be reduced by 1.3 min on average when a credible interval of 2 dB is used as the termination criterion instead of testing 30 sentences. Normal-hearing listeners and well-performing cochlear implant users in particular can benefit from the shorter test duration. Although the novel procedure was developed for a German test, it can easily be applied to tests in any other language.
Title: BPACE: A Bayesian, Patient-Centered Procedure for Matrix Speech Tests in Noise (Trends in Hearing, 2023). Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/bf/c9/10.1177_23312165231191382.PMC10388612.pdf
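The credible-interval termination rule described in this abstract can be sketched with a simple grid-based Bayesian update. This is a minimal illustration assuming a logistic psychometric function with fixed slope and lapse rate; the function names and parameter values are hypothetical and not the authors' BPACE implementation.

```python
import math

def p_correct(snr_db, threshold_db, slope=1.0, lapse=0.05):
    # Logistic psychometric function; slope and lapse rate fixed for brevity
    core = 1.0 / (1.0 + math.exp(-slope * (snr_db - threshold_db)))
    return (1.0 - lapse) * core

def update_posterior(grid, prior, snr_db, correct):
    # One Bayesian update of the threshold posterior after a single sentence
    likes = [p_correct(snr_db, th) if correct else 1.0 - p_correct(snr_db, th)
             for th in grid]
    post = [pr * lk for pr, lk in zip(prior, likes)]
    z = sum(post)
    return [x / z for x in post]

def credible_interval(grid, post, mass=0.95):
    # Central credible interval from the posterior CDF; its width can serve
    # as a mid-run termination criterion (e.g., stop when hi - lo < 2 dB)
    tail = (1.0 - mass) / 2.0
    cdf, acc = [], 0.0
    for p in post:
        acc += p
        cdf.append(acc)
    lo = next(g for g, c in zip(grid, cdf) if c >= tail)
    hi = next(g for g, c in zip(grid, cdf) if c >= 1.0 - tail)
    return lo, hi
```

In a real track one would update after every sentence, place the next trial near the posterior mode rather than at the threshold itself, and stop once the interval width falls below 2 dB, as in the study.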
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231211437
Rolph Houben, Ilja Reinten, Wouter A Dreschler, Roland Mathijssen, Tjeerd M H Dijkstra
Preference for noise reduction (NR) strength differs between individuals. The purpose of this study was (1) to investigate whether hearing loss influences this preference, (2) to find the number of distinct settings required to classify participants into similar groups based on their preference for NR strength, and (3) to estimate the number of paired comparisons needed to predict to which preference group a participant belongs. A paired-comparison paradigm was used in which participants listened to pairs of speech-in-noise stimuli processed by NR with 10 different strength settings. Participants indicated their preferred sound sample. The 30 participants were divided into three groups according to hearing status (normal hearing, mild hearing loss, and moderate hearing loss). The results showed that (1) participants with moderate hearing loss preferred stronger NR than participants with normal hearing; (2) cluster analysis based solely on the preference for NR strength showed that the data could be described well by dividing the participants into three preference clusters; (3) the appropriate cluster membership could be found with 15 paired comparisons. We conclude that, on average, a higher hearing loss is related to a preference for stronger NR, at least for our NR algorithm and our participants. The results show that it might be possible to use a limited set of pre-set NR strengths that can be chosen clinically. For our NR algorithm, one might use three settings: no NR, intermediate NR, and strong NR. Paired comparisons might be used to find the optimal one of the three settings.
Title: Preferred Strength of Noise Reduction for Normally Hearing and Hearing-Impaired Listeners (Trends in Hearing, 2023). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10666719/pdf/
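As a rough illustration of how paired-comparison responses can be aggregated, the sketch below ranks NR settings by net wins. This is a hypothetical win-count scheme for intuition only, not the cluster analysis used in the study.

```python
from collections import Counter

def rank_settings(comparisons):
    """Rank NR settings by net wins from paired-comparison outcomes.

    comparisons: iterable of (winner, loser) pairs of setting labels.
    A simple win-count ranking; Counter returns 0 for unseen labels.
    """
    wins = Counter(winner for winner, _ in comparisons)
    losses = Counter(loser for _, loser in comparisons)
    settings = set(wins) | set(losses)
    return sorted(settings, key=lambda s: wins[s] - losses[s], reverse=True)
```

With 10 strength settings, a full round-robin needs 45 comparisons; the study's finding that 15 comparisons suffice to assign a cluster suggests a much sparser design is enough in practice.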
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231160967
Larry E Humes
The National Health Interview Survey (NHIS) data on self-reported trouble hearing and the use of hearing aids were examined for the 12 recent surveys from 2007 to 2018 for adults from 18 to 85+ years of age. The aggregate dataset for all years included data from 357,714 adult respondents. Sample sizes for the annual data ranged from 22,058 (2008) to 36,798 (2014). The prevalence of self-reported trouble hearing and of hearing-aid use, both current use and ever having used hearing aids, is reported for males and females for each age decade. Measures of unmet hearing healthcare (HHC) need were derived from estimates of the prevalence of hearing-aid use among those with self-reported trouble hearing. Logistic-regression analyses identified variables affecting the odds of having self-reported trouble hearing, of using or rejecting hearing aids, and of having unmet HHC needs. The results largely corroborate and extend the findings of recent analyses of data from the National Health and Nutrition Examination Survey (NHANES) for a similar period (2011-2020). Overall, 18.5% of males (95% CI [18.2%-18.8%]) had self-reported trouble hearing, and 76.6% [76.0%-77.2%] of these individuals had never used hearing aids; 13.1% of females [12.9%-13.4%] had trouble hearing, and 79.5% [78.9%-80.1%] of these individuals had never used hearing aids. Unmet HHC needs are highly prevalent in the United States and have been so for many years.
Title: U.S. Population Data on Self-Reported Trouble Hearing and Hearing-Aid Use in Adults: National Health Interview Survey, 2007-2018 (Trends in Hearing, 2023). Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/1b/e3/10.1177_23312165231160967.PMC10083510.pdf
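The prevalence estimates with 95% confidence intervals quoted above have the familiar proportion form. A textbook normal-approximation (Wald) interval can be sketched as follows; this ignores the NHIS complex survey design and weighting, so it is illustrative only and will not reproduce the paper's design-adjusted intervals.

```python
import math

def prevalence_ci(k, n, z=1.96):
    """Point estimate and Wald 95% CI for a prevalence proportion.

    k respondents reporting the condition out of n total; a plain
    unweighted estimator, not the NHIS design-weighted one.
    """
    p = k / n
    se = math.sqrt(p * (1.0 - p) / n)  # standard error of the proportion
    return p, p - z * se, p + z * se
```

Note how the interval width scales with 1/sqrt(n): with the NHIS sample sizes in the hundreds of thousands, intervals a few tenths of a percentage point wide, as quoted above, are expected.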
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231171988
Bernhard Laback
The perceived azimuth of a target sound is determined by the interaural time difference and the interaural level difference (ILD) and is subject to contextual effects from precursor sounds. This study characterized ILD-based precursor effects (PEs) for high-frequency stimuli in a total of seven normal-hearing listeners. In Experiment 1, precursor and target were band-pass-filtered noises approximately centered at 4 kHz (1.2- and 1-octave bandwidth, respectively) separated by a 10-ms gap. The effects of precursor location (ipsilateral, contralateral, and central) on the perceived target azimuth were measured using a head-pointing task. Relative to control trials without a precursor, ipsilateral precursors biased the perceived target azimuth toward midline (medial bias) and contralateral precursors biased it contralaterally (lateral bias). Central precursors caused a symmetric lateral bias. An auditory periphery model that determines the "internal" ILD at the auditory nerve level, including either realistic efferent compression control or auditory nerve adaptation, explained about 50% of the variance in the PEs. These within-trial PEs were accompanied by an across-trial PE, inducing medial bias. Experiment 2 studied the role of sequential segregation in the within-trial PE by introducing a pitch difference between precursor and target. Segregation conditions caused increased PE for ipsilateral, no effect for contralateral, and either no effect or reduced PE for central precursors. Overall, the ILD-based within-trial PE appears to be preshaped already in the auditory periphery and the mechanism underlying at least the ipsilateral PE appears to be immune against sequential segregation.
Title: Contextual Lateralization Based on Interaural Level Differences Is Preshaped by the Auditory Periphery and Predominantly Immune Against Sequential Segregation (Trends in Hearing, 2023). Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/7e/db/10.1177_23312165231171988.PMC10185981.pdf
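The interaural level difference driving these lateralization effects is, at its simplest, a ratio of ear-signal levels expressed in dB. The sketch below computes a plain broadband RMS-based ILD; this is a deliberate simplification, not the "internal" ILD computed at the auditory-nerve level by the periphery model in the study.

```python
import math

def ild_db(left, right):
    """Interaural level difference in dB from left- and right-ear samples.

    Positive values mean the right-ear signal is more intense; a plain
    broadband RMS comparison without any peripheral processing.
    """
    def rms(x):
        return math.sqrt(sum(s * s for s in x) / len(x))
    return 20.0 * math.log10(rms(right) / rms(left))
```

A precursor sound that engages compression or adaptation asymmetrically across the two ears would change the effective (internal) ILD of the target even though this acoustic ILD is unchanged, which is the mechanism the periphery model captures.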
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231168741
Sudeep Surendran, Srdan Prodanovic, Stefan Stenfelt
Bone conduction (BC) stimulation has mainly been used for clinical hearing assessment and hearing aids, where stimulation is applied at the mastoid behind the ear. Recently, BC has become popular for communication headsets, where the stimulation position is often close to the anterior part of the ear canal opening. BC sound transmission for this stimulation position was investigated here in 21 participants by ear canal sound pressure measurements and hearing threshold assessment, as well as by simulations in the LiUHead. The results indicated that a stimulation position close to the ear canal opening improves the sensitivity for BC sound by around 20 dB, and by up to 40 dB at some frequencies. The transcranial transmission typically ranges between -40 and -25 dB. This decreased transcranial transmission facilitates the saliency of binaural cues and implies that BC headsets are suitable for virtual and augmented reality applications. The findings suggest that with BC stimulation close to the ear canal opening, the sound pressure in the ear canal dominates the perception of BC sound. With this stimulation, the ear canal pathway was estimated to be around 25 dB greater than other contributors, such as skull bone vibrations, for hearing BC sound in a healthy ear. This increased contribution of the ear canal sound pressure to BC hearing means that a position close to the ear canal is not appropriate for clinical use since, in that case, a conductive hearing loss affects BC and air conduction thresholds by a similar amount.
Title: Hearing Through Bone Conduction Headsets (Trends in Hearing, 2023). Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/54/04/10.1177_23312165231168741.PMC10126703.pdf
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231192290
Chengshi Zheng, Chenyang Xu, Meihuang Wang, Xiaodong Li, Brian C J Moore
Speech and music both play fundamental roles in daily life. Speech is important for communication while music is important for relaxation and social interaction. Both speech and music have a large dynamic range. This does not pose problems for listeners with normal hearing. However, for hearing-impaired listeners, elevated hearing thresholds may result in low-level portions of sound being inaudible. Hearing aids with frequency-dependent amplification and amplitude compression can partly compensate for this problem. However, the gain required for low-level portions of sound to compensate for the hearing loss can be larger than the maximum stable gain of a hearing aid, leading to acoustic feedback. Feedback control is used to avoid such instability, but this can lead to artifacts, especially when the gain is only just below the maximum stable gain. We previously proposed a deep-learning method called DeepMFC for controlling feedback and reducing artifacts and showed that when the sound source was speech DeepMFC performed much better than traditional approaches. However, its performance using music as the sound source was not assessed and the way in which it led to improved performance for speech was not determined. The present paper reveals how DeepMFC addresses feedback problems and evaluates DeepMFC using speech and music as sound sources with both objective and subjective measures. DeepMFC achieved good performance for both speech and music when it was trained with matched training materials. When combined with an adaptive feedback canceller it provided over 13 dB of additional stable gain for hearing-impaired listeners.
Title: Evaluation of deep marginal feedback cancellation for hearing aids using speech and music (Trends in Hearing, 2023). Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/a3/70/10.1177_23312165231192290.PMC10408330.pdf
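The notion of a maximum stable gain can be made concrete: the hearing-aid feedback loop can become unstable once the forward gain times the feedback-path gain reaches unity at any frequency. The sketch below derives that limit from a hypothetical feedback-path magnitude response; it illustrates the constraint the abstract describes, not the DeepMFC algorithm itself.

```python
import math

def max_stable_gain_db(feedback_path_mag):
    """Maximum flat forward gain (dB) before the loop can become unstable.

    feedback_path_mag: magnitude response |F(f)| of the acoustic feedback
    path sampled at a set of frequencies. Instability can occur where
    |G| * |F| >= 1, so the limit is set by the largest |F|.
    """
    worst = max(feedback_path_mag)
    return -20.0 * math.log10(worst)
```

A feedback canceller effectively shrinks the residual feedback path, raising this limit; that is the sense in which the system above provides "over 13 dB of additional stable gain."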
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231195987
Oliver Zobay, Graham Naylor, Gabrielle H Saunders, Lauren K Dillard
Longitudinal electronic health records from a large sample of new hearing-aid (HA) recipients in the US Veterans Affairs healthcare system were used to evaluate associations of fitting laterality with long-term HA use persistence as measured by battery order records, as well as with short-term HA use and satisfaction as assessed using the International Outcome Inventory for Hearing Aids (IOI-HA), completed within 180 days of HA fitting. The large size of our dataset allowed us to address two aspects of fitting laterality that have not received much attention, namely the degree of hearing asymmetry and the question of which ear to fit if fitting unilaterally. The key findings were that long-term HA use persistence was considerably lower for unilateral fittings for symmetric hearing loss (HL) and for unilateral worse-ear fittings for asymmetric HL, as compared to bilateral and unilateral better-ear fittings. In contrast, no differences across laterality categories were observed for short-term self-reported HA usage. Total IOI-HA score was poorer for unilateral fittings of symmetric HL and for unilateral better-ear fittings compared to bilateral for asymmetric HL. We thus conclude that bilateral fittings yield the best short- and long-term outcomes, and while unilateral and bilateral fittings can result in similar outcomes on some measures, we did not identify any HL configuration for which a bilateral fitting would lead to poorer outcomes. However, if a single HA is to be fitted, then our results indicate that a better-ear fitting has a higher probability of long-term HA use persistence than a worse-ear fitting.
Title: Fitting a Hearing Aid on the Better Ear, Worse Ear, or Both: Associations of Hearing-aid Fitting Laterality with Outcomes in a Large Sample of US Veterans (Trends in Hearing, 2023). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10467180/pdf/
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231182518
Michael A Stone, Melanie Lough, Keith Wilbraham, Helen Whiston, Harvey Dillon
Remote microphones (RMs) enable clearer reception of speech than would normally be achievable when relying on the acoustic sound field at the listener's ear (Hawkins, J Sp Hear Disord 49, 409-418, 1984). They are used in a wide range of environments, one example being for children in educational settings. The international standards defining assessment methods for the technical performance of RMs rely on free-field (anechoic) delivery, a rarely met acoustic scenario. Although some work has been offered on more real-world testing (Husstedt et al., Int J Audiol 61, 34-45, 2022), the area remains under-investigated. The electroacoustic performance of five RMs in a low-reverberation room was compared in order to assess just the RM link, rather than measurements at the end of the signal chain (for example, speech intelligibility in human observers). The study pilots physical and electroacoustic measures to characterize the performance of RMs. The measures are based on those found in the IEC 60118 standards relating to hearing aids, but are modified for diffuse-field delivery as well as adaptive signal processing. Speech intelligibility and quality were assessed by computer models. Noise bands were often processed into irrelevance by adaptive systems that could not be deactivated; speech-related signals were more successful. The five RMs achieved similarly good levels of predicted intelligibility for each of two background noise levels. The main difference observed was in the transmission delay between microphone and ear. This ranged between 40 and 50 ms in two of the systems, at the upper edge of the acceptability range necessary for audio-visual synchrony.
{"title":"Toward a Real-World Technical Test Battery for Remote Microphone Systems Used with Hearing Prostheses.","authors":"Michael A Stone, Melanie Lough, Keith Wilbraham, Helen Whiston, Harvey Dillon","doi":"10.1177/23312165231182518","DOIUrl":"https://doi.org/10.1177/23312165231182518","url":null,"abstract":"<p><p>Remote microphones (RMs) enable clearer reception of speech than would be normally achievable when relying on the acoustic sound field at the listener's ear (Hawkins, J Sp Hear Disord 49, 409-418, 1984). They are used in a wide range of environments, with one example being for children in educational settings. The international standards defining the assessment methods of the technical performance of RMs rely on free-field (anechoic) delivery, a rarely met acoustic scenario. Although some work has been offered on more real-world testing (Husstedt et al., Int J Audiol 61, 34-45. 2022), the area remains under-investigated. The electroacoustic performance of five RMs in a low-reverberation room was compared in order to assess just the RM link, rather than measurements at the end of the signal chain, for example, speech intelligibility in human observers. It pilots physical- and electro-acoustic measures to characterize the performance of RMs. The measures are based on those found in the IEC 60118 standards relating to hearing aids, but modified for diffuse-field delivery, as well as adaptive signal processing. Speech intelligibility and quality are assessed by computer models. Noise bands were often processed into irrelevance by adaptive systems that could not be deactivated. Speech-related signals were more successful. The five RMs achieved similar levels of good predicted intelligibility, for each of two background noise levels. The main difference observed was in the transmission delay between microphone and ear. 
This ranged between 40 and 50 ms in two of the systems, on the upper edge of acceptability necessary for audio-visual synchrony.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10345919/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9813516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
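The transmission delay between microphone and ear reported above can be estimated by locating the peak of the cross-correlation between the signal fed to the RM and the signal recorded at the ear. This is an illustrative sketch, not the measurement procedure used in the paper; the function name and test signal are the editor's own:

```python
import numpy as np

def estimate_delay_ms(reference, received, fs):
    """Estimate the delay (in ms) of `received` relative to `reference`
    from the peak of their full cross-correlation."""
    corr = np.correlate(received, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)  # lag in samples
    return 1000.0 * lag / fs

# Toy check: white noise delayed by 45 ms at fs = 16 kHz
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)
delay_samples = int(0.045 * fs)
y = np.concatenate([np.zeros(delay_samples), x])[:len(x)]
print(estimate_delay_ms(x, y, fs))  # 45.0
```

In a real measurement the received signal would also be shaped by the RM's processing, so a broadband probe and a clearly dominant correlation peak are needed for a reliable estimate.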
Pub Date : 2023-01-01DOI: 10.1177/23312165231170501
Tanmayee Pathre, Jeremy Marozeau
Several studies have established that cochlear implant (CI) listeners rely on the tempo of music to judge its emotional content. However, a re-analysis of a study in which CI listeners judged the emotion conveyed by piano pieces on a scale from happy to sad revealed only a weak correlation between tempo and emotion. The present study explored which temporal cues in music influence emotion judgments among normal hearing (NH) listeners, which might provide insights into the cues utilized by CI listeners. Experiment 1 replicated the Vannson et al. study with NH listeners, using the rhythmic patterns of the piano pieces re-created on congas. The temporal cues were preserved while the tonal ones were removed. The results showed that (i) tempo was weakly correlated with emotion judgments, and (ii) NH listeners' judgments for congas were similar to CI listeners' judgments for piano. In Experiment 2, two tasks were administered with congas played at three different tempi: an emotion-judgment task and a tapping task to record listeners' perceived tempo. Perceived tempo was a better predictor than the nominal tempo, but its physical correlate, the mean onset-to-onset difference (MOOD), a measure of the average time between notes, yielded even higher correlations with NH listeners' emotion judgments. This result suggests that, rather than the tempo, listeners rely on the average time between consecutive notes to judge the emotional content of music. CI listeners could utilize this cue to judge the emotional content of music.
{"title":"Temporal Cues in the Judgment of Music Emotion for Normal and Cochlear Implant Listeners.","authors":"Tanmayee Pathre, Jeremy Marozeau","doi":"10.1177/23312165231170501","DOIUrl":"https://doi.org/10.1177/23312165231170501","url":null,"abstract":"<p><p>Several studies have established that Cochlear implant (CI) listeners rely on the tempo of music to judge the emotional content of music. However, a re-analysis of a study in which CI listeners judged the emotion conveyed by piano pieces on a scale from happy to sad revealed a weak correlation between tempo and emotion. The present study explored which temporal cues in music influence emotion judgments among normal hearing (NH) listeners, which might provide insights into the cues utilized by CI listeners. Experiment 1 was a replication of the Vannson et al. study with NH listeners using rhythmic patterns of piano created with congas. The temporal cues were preserved while the tonal ones were removed. The results showed (i) tempo was weakly correlated with emotion judgments, (ii) NH listeners' judgments for congas were similar to CI listeners' judgments for piano. In Experiment 2, two tasks were administered with congas played at three different tempi: emotion judgment and a tapping task to record listeners' perceived tempo. Perceived tempo was a better predictor than the tempo, but its physical correlate, mean onset-to-onset difference (MOOD), a measure of the average time between notes, yielded higher correlations with NH listeners' emotion judgments. This result suggests that instead of the tempo, listeners rely on the average time between consecutive notes to judge the emotional content of music. 
CI listeners could utilize this cue to judge the emotional content of music.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/e3/96/10.1177_23312165231170501.PMC10134148.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9868942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
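The MOOD measure described above (the average time between consecutive note onsets) can be sketched in a few lines. This is the editor's reading of the abstract's definition, not the paper's exact computation, and the function name is illustrative:

```python
def mean_onset_to_onset(onsets_s):
    """Mean onset-to-onset difference (MOOD): the average time, in
    seconds, between consecutive note onsets."""
    if len(onsets_s) < 2:
        raise ValueError("need at least two note onsets")
    diffs = [b - a for a, b in zip(onsets_s, onsets_s[1:])]
    return sum(diffs) / len(diffs)

# Two patterns with the same duration and nominal tempo (60 BPM) but
# different note densities have very different MOOD values:
quarter_notes = [0.0, 1.0, 2.0, 3.0]             # MOOD = 1.0 s
sixteenth_notes = [i * 0.25 for i in range(13)]  # MOOD = 0.25 s
print(mean_onset_to_onset(quarter_notes), mean_onset_to_onset(sixteenth_notes))
```

The example illustrates why MOOD can dissociate from tempo: a piece can be fast in nominal tempo yet sparse in note onsets, or vice versa.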
Pub Date : 2023-01-01DOI: 10.1177/23312165221138390
Tushar Verma, Scott C Aker, Jeremy Marozeau
The study tests the hypothesis that vibrotactile stimulation can affect timbre perception. A multidimensional scaling experiment was conducted: twenty listeners with normal hearing and nine cochlear implant (CI) users were asked to judge the dissimilarity of a set of synthetic sounds that varied in attack time and amplitude modulation depth. The listeners were simultaneously presented with vibrotactile stimuli, which also varied in attack time and amplitude modulation depth. The results showed that alterations to the temporal waveform of the tactile stimuli affected the listeners' dissimilarity judgments of the audio. A three-dimensional analysis revealed evidence of crossmodal processing, in which audio and tactile cues jointly accounted for the dissimilarity judgments. For the normal-hearing listeners, 86% of the first dimension was explained by audio impulsiveness and 14% by tactile impulsiveness; 75% of the second dimension was explained by audio roughness, or fast amplitude modulation, while its tactile counterpart explained 25%. Interestingly, the third dimension reflected a combination of 43% audio impulsiveness and 57% tactile amplitude modulation. For the CI listeners, the first dimension was mostly accounted for by tactile roughness and the second by audio impulsiveness. This experiment shows that the perception of timbre can be affected by tactile input and could lead to the development of new audio-tactile devices for people with hearing impairment.
{"title":"Effect of Vibrotactile Stimulation on Auditory Timbre Perception for Normal-Hearing Listeners and Cochlear-Implant Users.","authors":"Tushar Verma, Scott C Aker, Jeremy Marozeau","doi":"10.1177/23312165221138390","DOIUrl":"https://doi.org/10.1177/23312165221138390","url":null,"abstract":"<p><p>The study tests the hypothesis that vibrotactile stimulation can affect timbre perception. A multidimensional scaling experiment was conducted. Twenty listeners with normal hearing and nine cochlear implant users were asked to judge the dissimilarity of a set of synthetic sounds that varied in attack time and amplitude modulation depth. The listeners were simultaneously presented with vibrotactile stimuli, which varied also in attack time and amplitude modulation depth. The results showed that alterations to the temporal waveform of the tactile stimuli affected the listeners' dissimilarity judgments of the audio. A three-dimensional analysis revealed evidence of crossmodal processing where the audio and tactile equivalents combined accounted for their dissimilarity judgments. For the normal-hearing listeners, 86% of the first dimension was explained by audio impulsiveness and 14% by tactile impulsiveness; 75% of the second dimension was explained by the audio roughness or fast amplitude modulation, while its tactile counterpart explained 25%. Interestingly, the third dimension revealed a combination of 43% of audio impulsiveness and 57% of tactile amplitude modulation. For the CI listeners, the first dimension was mostly accounted for by the tactile roughness and the second by the audio impulsiveness. 
This experiment shows that the perception of timbre can be affected by tactile input and could lead to the developing of new audio-tactile devices for people with hearing impairment.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/24/be/10.1177_23312165221138390.PMC9932763.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10748380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
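The multidimensional scaling analysis used in the study above takes a matrix of pairwise dissimilarity judgments and embeds the stimuli in a low-dimensional space whose axes can then be interpreted (here, impulsiveness and roughness). A minimal sketch of classical (Torgerson) MDS, not necessarily the exact variant the authors used:

```python
import numpy as np

def classical_mds(D, n_dims=3):
    """Classical (Torgerson) MDS: embed n points in n_dims dimensions
    from an n-by-n matrix of pairwise dissimilarities D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:n_dims]  # keep largest eigenvalues
    scale = np.sqrt(np.clip(eigvals[idx], 0.0, None))
    return eigvecs[:, idx] * scale

# Toy check: recover a 2-D configuration from its distance matrix
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D, n_dims=2)
D_rec = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(D, D_rec))  # distances reproduced up to rotation/reflection
```

With perceptual data the dissimilarities are not exact Euclidean distances, so the recovered dimensions are interpreted by regressing them against candidate stimulus attributes, as the authors did with audio and tactile impulsiveness and roughness.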