Speech Intelligibility in Reverberation is Reduced During Self-Rotation
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231188619
Ľuboš Hládek, Bernhard U Seeber
Speech intelligibility in cocktail party situations has traditionally been studied for stationary sound sources and stationary participants. Here, speech intelligibility and behavior were investigated during active self-rotation of standing participants in a spatialized speech test. We investigated whether people would rotate to improve speech intelligibility, and whether knowing the target location would be further beneficial. Target sentences appeared randomly at one of four possible locations: 0°, ±90°, or 180° relative to the participant's initial orientation on each trial, while speech-shaped noise was presented from the front (0°). Participants responded naturally with self-rotating motion. Target sentences were presented either without (Audio-only) or with a picture of an avatar (Audio-Visual). In a baseline (Static) condition, participants stood still without visual location cues. Participants' self-orientations undershot the target location but were close to acoustically optimal. Participants oriented in an acoustically optimal way more often, and speech intelligibility was higher, in the Audio-Visual than in the Audio-only condition for the lateral targets. The intelligibility of individual words in Audio-Visual and Audio-only increased during self-rotation towards the rear target, but was reduced for the lateral targets when compared to Static, which could be mostly, but not fully, attributed to changes in spatial unmasking. A speech intelligibility prediction based on a model of static spatial unmasking that accounted for self-rotations overestimated participants' performance by 1.4 dB. The results suggest that speech intelligibility is reduced during self-rotation, and that visual cues of location help achieve more optimal self-rotations and better speech intelligibility.
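To make the geometry concrete, here is a minimal Python sketch of better-ear listening, the main driver of the spatial unmasking discussed above. The sinusoidal head-shadow function and its ±3 dB per-ear range are illustrative assumptions, not the authors' model; the point is only that with the masker at 0° and a lateral target, the better-ear SNR peaks at an intermediate head rotation, which is consistent with the observed undershoot being near acoustically optimal.

```python
# Minimal better-ear listening sketch. The head-shadow function is a
# hypothetical sinusoidal approximation (+/-3 dB per ear), NOT the
# spatial-unmasking model used in the study.
import numpy as np

def head_shadow_gain_db(source_az_deg, ear):
    """Broadband level at one ear relative to frontal incidence (dB).
    ear = +1 (right) or -1 (left); lateral sources give up to ~6 dB ILD."""
    return 3.0 * ear * np.sin(np.deg2rad(source_az_deg))

def better_ear_snr_db(target_az, masker_az, head_az, base_snr_db=0.0):
    """SNR at the acoustically better ear; azimuths in room coordinates (deg)."""
    t_rel, m_rel = target_az - head_az, masker_az - head_az  # head-relative
    return max(base_snr_db
               + head_shadow_gain_db(t_rel, ear)
               - head_shadow_gain_db(m_rel, ear)
               for ear in (-1, 1))

# Masker at 0 deg, target at +90 deg: the better-ear SNR peaks near a 45 deg
# rotation, so undershooting the target azimuth can still be near-optimal.
for head_az in (0, 30, 45, 60, 90):
    print(head_az, round(better_ear_snr_db(90, 0, head_az), 2))
```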
{"title":"Speech Intelligibility in Reverberation is Reduced During Self-Rotation.","authors":"Ľuboš Hládek, Bernhard U Seeber","doi":"10.1177/23312165231188619","DOIUrl":"10.1177/23312165231188619","url":null,"abstract":"<p><p>Speech intelligibility in cocktail party situations has been traditionally studied for stationary sound sources and stationary participants. Here, speech intelligibility and behavior were investigated during active self-rotation of standing participants in a spatialized speech test. We investigated if people would rotate to improve speech intelligibility, and we asked if knowing the target location would be further beneficial. Target sentences randomly appeared at one of four possible locations: 0°, ± 90°, 180° relative to the participant's initial orientation on each trial, while speech-shaped noise was presented from the front (0°). Participants responded naturally with self-rotating motion. Target sentences were presented either without (Audio-only) or with a picture of an avatar (Audio-Visual). In a baseline (Static) condition, people were standing still without visual location cues. Participants' self-orientation undershot the target location and orientations were close to acoustically optimal. Participants oriented more often in an acoustically optimal way, and speech intelligibility was higher in the Audio-Visual than in the Audio-only condition for the lateral targets. The intelligibility of the individual words in Audio-Visual and Audio-only increased during self-rotation towards the rear target, but it was reduced for the lateral targets when compared to Static, which could be mostly, but not fully, attributed to changes in spatial unmasking. Speech intelligibility prediction based on a model of static spatial unmasking considering self-rotations overestimated the participant performance by 1.4 dB. The results suggest that speech intelligibility is reduced during self-rotation, and that visual cues of location help to achieve more optimal self-rotations and better speech intelligibility.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231188619"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10363862/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9872318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modified T² Statistics for Improved Detection of Aided Cortical Auditory Evoked Potentials in Hearing-Impaired Infants
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231154035
Michael Alexander Chesnaye, Steven Lewis Bell, James Michael Harte, Lisbeth Birkelund Simonsen, Anisa Sadru Visram, Michael Anthony Stone, Kevin James Munro, David Martin Simpson
The cortical auditory evoked potential (CAEP) is a change in neural activity in response to sound, and is of interest for the audiological assessment of infants, especially those who use hearing aids. Within this population, CAEP waveforms are known to vary substantially across individuals, which makes detecting the CAEP through visual inspection a challenging task. It also means that some of the best automated CAEP detection methods used in adults are probably not suitable for this population. This study therefore evaluates and optimizes the performance of new and existing methods for aided CAEP detection (i.e., with stimuli presented through the subjects' hearing aids) in infants with hearing loss. Methods include the conventional Hotelling's T² test, various modified q-sample statistics, and two novel variants of T² statistics designed to exploit the correlation structure underlying the data. Various additional methods from the literature were also evaluated, including the previously best-performing methods for adult CAEP detection. Data for the assessment consisted of aided CAEPs recorded from 59 infant hearing aid users with mild to profound bilateral hearing loss, along with simulated signals. The highest test sensitivities were observed for the modified T² statistics, followed by the modified q-sample statistics, and lastly by the conventional Hotelling's T² test, which showed low detection rates for ensemble sizes below 80 epochs. The high test sensitivities at small ensemble sizes observed for the modified T² and q-sample statistics are especially relevant for infant testing, as the time available for data collection tends to be limited in this population.
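For readers unfamiliar with the baseline method, below is a minimal sketch of the conventional one-sample Hotelling's T² test applied to evoked-response detection. The feature extraction (mean voltage in ten consecutive time bins) and all parameter values are common choices assumed for illustration; the modified T² and q-sample statistics evaluated in the study are not reproduced here.

```python
# One-sample Hotelling's T^2 detection sketch: each epoch is reduced to a
# feature vector (mean voltage per time bin) and we test H0: mean vector = 0.
import numpy as np
from scipy.stats import f

def hotelling_t2_detect(epochs, n_bins=10):
    """epochs: array (n_epochs, n_samples) of baseline-corrected EEG sweeps.
    Returns (T2, p_value) for H0: no evoked response."""
    n_epochs, n_samples = epochs.shape
    # Feature extraction: mean voltage in consecutive time bins per epoch.
    bins = np.array_split(np.arange(n_samples), n_bins)
    feats = np.column_stack([epochs[:, b].mean(axis=1) for b in bins])
    xbar = feats.mean(axis=0)
    S = np.cov(feats, rowvar=False)              # sample covariance matrix
    t2 = n_epochs * xbar @ np.linalg.solve(S, xbar)
    p = n_bins                                   # feature dimensionality
    f_stat = (n_epochs - p) / (p * (n_epochs - 1)) * t2
    return t2, f.sf(f_stat, p, n_epochs - p)     # exact F-distribution p-value

# Example with simulated data: 60 epochs of noise plus a small fixed response.
rng = np.random.default_rng(0)
epochs = rng.normal(0, 1, (60, 200)) + 0.3 * np.sin(np.linspace(0, np.pi, 200))
print(hotelling_t2_detect(epochs))
```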
{"title":"Modified T<sup>2</sup> Statistics for Improved Detection of Aided Cortical Auditory Evoked Potentials in Hearing-Impaired Infants.","authors":"Michael Alexander Chesnaye, Steven Lewis Bell, James Michael Harte, Lisbeth Birkelund Simonsen, Anisa Sadru Visram, Michael Anthony Stone, Kevin James Munro, David Martin Simpson","doi":"10.1177/23312165231154035","DOIUrl":"10.1177/23312165231154035","url":null,"abstract":"<p><p>The cortical auditory evoked potential (CAEP) is a change in neural activity in response to sound, and is of interest for audiological assessment of infants, especially those who use hearing aids. Within this population, CAEP waveforms are known to vary substantially across individuals, which makes detecting the CAEP through visual inspection a challenging task. It also means that some of the best automated CAEP detection methods used in adults are probably not suitable for this population. This study therefore evaluates and optimizes the performance of new and existing methods for aided (i.e., the stimuli are presented through subjects' hearing aid(s)) CAEP detection in infants with hearing loss. Methods include the conventional Hotellings T<sup>2</sup> test, various modified q-sample statistics, and two novel variants of T<sup>2</sup> statistics, which were designed to exploit the correlation structure underlying the data. Various additional methods from the literature were also evaluated, including the previously best-performing methods for adult CAEP detection. Data for the assessment consisted of aided CAEPs recorded from 59 infant hearing aid users with mild to profound bilateral hearing loss, and simulated signals. The highest test sensitivities were observed for the modified T<sup>2</sup> statistics, followed by the modified q-sample statistics, and lastly by the conventional Hotelling's T<sup>2</sup> test, which showed low detection rates for ensemble sizes <80 epochs. The high test sensitivities at small ensemble sizes observed for the modified T<sup>2</sup> and q-sample statistics are especially relevant for infant testing, as the time available for data collection tends to be limited in this population.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231154035"},"PeriodicalIF":2.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9974628/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10828646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Capturing Visual Attention With Perturbed Auditory Spatial Cues
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231182289
Chiara Valzolgher, Mariam Alzaher, Valérie Gaveau, Aurélie Coudert, Mathieu Marx, Eric Truy, Pascal Barone, Alessandro Farnè, Francesco Pavani
Lateralized sounds can orient visual attention, with benefits for audio-visual processing. Here, we asked to what extent perturbed auditory spatial cues, resulting from cochlear implants (CIs) or unilateral hearing loss (uHL), allow this automatic mechanism of information selection from the audio-visual environment. We used a classic paradigm from experimental psychology (capture of visual attention with sounds) to probe the integrity of audio-visual attentional orienting in 60 adults with hearing loss: bilateral CI users (N = 20), unilateral CI users (N = 20), and individuals with uHL (N = 20). For comparison, we also included a group of normal-hearing (NH, N = 20) participants, tested in binaural and monaural listening conditions (i.e., with one ear plugged). All participants also completed a sound localization task to assess spatial hearing skills. Comparable audio-visual orienting was observed in bilateral CI, uHL, and binaural NH participants. By contrast, audio-visual orienting was, on average, absent in unilateral CI users and reduced in NH participants listening with one ear plugged. Spatial hearing skills were better in bilateral CI, uHL, and binaural NH participants than in unilateral CI users and monaurally plugged NH listeners. In unilateral CI users, spatial hearing skills correlated with audio-visual orienting abilities. These novel results show that audio-visual attentional orienting can be preserved in bilateral CI users and uHL patients to a greater extent than in unilateral CI users. This highlights the importance of assessing the impact of hearing loss beyond auditory difficulties alone, to capture the extent to which it may enable or impede typical interactions with the multisensory environment.
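As an illustration of how the two measures can be related, here is a small sketch, under assumed data structures and with hypothetical simulated numbers, of an attention-capture index (the reaction-time cost of invalidly cued trials) and a sound-localization error score, correlated across participants as described for the unilateral CI group.

```python
# Assumed analysis sketch, not the authors' pipeline: capture index and
# localization error per participant, then their across-participant correlation.
import numpy as np
from scipy.stats import pearsonr

def capture_index_ms(rt_valid_ms, rt_invalid_ms):
    """Positive values = the sound drew visual attention to its side."""
    return np.mean(rt_invalid_ms) - np.mean(rt_valid_ms)

def localization_rmse_deg(response_az, true_az):
    return np.sqrt(np.mean((np.asarray(response_az) - np.asarray(true_az)) ** 2))

# Hypothetical per-participant summaries for one group of 20 listeners.
rng = np.random.default_rng(1)
loc_err = rng.uniform(5, 60, 20)                      # deg RMSE
capture = 40 - 0.5 * loc_err + rng.normal(0, 5, 20)   # ms; worse localization,
r, p = pearsonr(loc_err, capture)                     # weaker capture (simulated)
print(f"r = {r:.2f}, p = {p:.3f}")
```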
{"title":"Capturing Visual Attention With Perturbed Auditory Spatial Cues.","authors":"Chiara Valzolgher, Mariam Alzaher, Valérie Gaveau, Aurélie Coudert, Mathieu Marx, Eric Truy, Pascal Barone, Alessandro Farnè, Francesco Pavani","doi":"10.1177/23312165231182289","DOIUrl":"10.1177/23312165231182289","url":null,"abstract":"<p><p>Lateralized sounds can orient visual attention, with benefits for audio-visual processing. Here, we asked to what extent perturbed auditory spatial cues-resulting from cochlear implants (CI) or unilateral hearing loss (uHL)-allow this automatic mechanism of information selection from the audio-visual environment. We used a classic paradigm from experimental psychology (capture of visual attention with sounds) to probe the integrity of audio-visual attentional orienting in 60 adults with hearing loss: bilateral CI users (<i>N</i> = 20), unilateral CI users (<i>N</i> = 20), and individuals with uHL (<i>N</i> = 20). For comparison, we also included a group of normal-hearing (NH, <i>N</i> = 20) participants, tested in binaural and monaural listening conditions (i.e., with one ear plugged). All participants also completed a sound localization task to assess spatial hearing skills. Comparable audio-visual orienting was observed in bilateral CI, uHL, and binaural NH participants. By contrast, audio-visual orienting was, on average, absent in unilateral CI users and reduced in NH listening with one ear plugged. Spatial hearing skills were better in bilateral CI, uHL, and binaural NH participants than in unilateral CI users and monaurally plugged NH listeners. In unilateral CI users, spatial hearing skills correlated with audio-visual-orienting abilities. These novel results show that audio-visual-attention orienting can be preserved in bilateral CI users and in uHL patients to a greater extent than unilateral CI users. This highlights the importance of assessing the impact of hearing loss beyond auditory difficulties alone: to capture to what extent it may enable or impede typical interactions with the multisensory environment.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231182289"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/84/a2/10.1177_23312165231182289.PMC10467228.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10127241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimization of Sound Coding Strategies to Make Singing Music More Accessible for Cochlear Implant Users
Pub Date: 2023-01-01 | DOI: 10.1177/23312165221148022
Sina Tahmasebi, Manuel Segovia-Martinez, Waldo Nogueira
Cochlear implants (CIs) are implantable medical devices that can partially restore hearing to people suffering from profound sensorineural hearing loss. While these devices provide good speech understanding in quiet, many CI users face difficulties when listening to music. Reasons include poor spatial specificity of electric stimulation, limited transmission of the spectral and temporal fine structure of acoustic signals, and restrictions in the dynamic range that can be conveyed via electric stimulation of the auditory nerve. The coding strategies currently used in CIs are typically designed for speech rather than music. This work investigates the optimization of CI coding strategies to make singing music more accessible to CI users. The aim is to reduce the spectral complexity of music by selecting fewer bands for stimulation, to attenuate the background instruments by strengthening a noise reduction algorithm, and to optimize the electric dynamic range through a back-end compressor. The optimizations were evaluated through both objective and perceptual measures of speech understanding and melody identification of singing voice with and without background instruments, as well as music appreciation questionnaires. Consistent with the objective measures, results gathered from the perceptual evaluations indicated that reducing the number of selected bands and optimizing the electric dynamic range significantly improved speech understanding in music. Moreover, results obtained from questionnaires show that the new music back-end compressor significantly improved music enjoyment. These optimizations show potential as a new CI program for improved perception of singing music.
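A minimal sketch of two of the manipulations described, n-of-m band selection and a back-end envelope compressor, is given below. The band count, selection size, and logarithmic compression law are illustrative assumptions rather than the authors' actual processing chain.

```python
# Illustrative sketch: keep only the strongest bands per frame (reducing
# spectral complexity) and compress the envelope into a narrow electric range.
import numpy as np

def select_n_of_m(band_env, n_select):
    """Keep the n largest of m band envelopes in this frame; zero the rest."""
    keep = np.argsort(band_env)[-n_select:]
    out = np.zeros_like(band_env)
    out[keep] = band_env[keep]
    return out

def backend_compressor(env, c=500.0):
    """Logarithmic compression of a normalized envelope (0..1) into 0..1,
    to be scaled later between electric threshold (T) and comfort (C) levels.
    The compression constant c is an assumed value."""
    return np.log1p(c * np.clip(env, 0.0, 1.0)) / np.log1p(c)

frame = np.abs(np.random.default_rng(2).normal(size=12))  # 12-band envelope frame
frame /= frame.max()
sparse = select_n_of_m(frame, n_select=4)   # fewer bands -> less spectral complexity
electric = backend_compressor(sparse)       # mapped into the electric dynamic range
print(np.round(electric, 2))
```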
{"title":"Optimization of Sound Coding Strategies to Make Singing Music More Accessible for Cochlear Implant Users.","authors":"Sina Tahmasebi, Manuel Segovia-Martinez, Waldo Nogueira","doi":"10.1177/23312165221148022","DOIUrl":"https://doi.org/10.1177/23312165221148022","url":null,"abstract":"<p><p>Cochlear implants (CIs) are implantable medical devices that can partially restore hearing to people suffering from profound sensorineural hearing loss. While these devices provide good speech understanding in quiet, many CI users face difficulties when listening to music. Reasons include poor spatial specificity of electric stimulation, limited transmission of spectral and temporal fine structure of acoustic signals, and restrictions in the dynamic range that can be conveyed via electric stimulation of the auditory nerve. The coding strategies currently used in CIs are typically designed for speech rather than music. This work investigates the optimization of CI coding strategies to make singing music more accessible to CI users. The aim is to reduce the spectral complexity of music by selecting fewer bands for stimulation, attenuating the background instruments by strengthening a noise reduction algorithm, and optimizing the electric dynamic range through a back-end compressor. The optimizations were evaluated through both objective and perceptual measures of speech understanding and melody identification of singing voice with and without background instruments, as well as music appreciation questionnaires. Consistent with the objective measures, results gathered from the perceptual evaluations indicated that reducing the number of selected bands and optimizing the electric dynamic range significantly improved speech understanding in music. Moreover, results obtained from questionnaires show that the new music back-end compressor significantly improved music enjoyment. These results have potential as a new CI program for improved singing music perception.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165221148022"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/a4/9b/10.1177_23312165221148022.PMC9837293.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10746839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time-specific Components of Pupil Responses Reveal Alternations in Effort Allocation Caused by Memory Task Demands During Speech Identification in Noise
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231153280
Patrycja Książek, Adriana A Zekveld, Lorenz Fiedler, Sophia E Kramer, Dorothea Wendt
Daily communication may be effortful due to poor acoustic quality. In addition, memory demands can induce effort, especially for long or complex sentences. In the current study, we tested the impact of memory task demands and speech-to-noise ratio on the time-specific components of effort allocation during speech identification in noise. Thirty normally hearing adults (15 females, mean age 42.2 years) participated. In an established auditory memory test, listeners heard lists of seven sentences in noise, repeated the sentence-final word after each sentence, and, if instructed, recalled the repeated words. We tested the effects of speech-to-noise ratio (SNR; -4 dB, +1 dB) and recall (Recall; Yes, No) on the time-specific components of pupil responses, trial baseline pupil size, and their dynamics (change) along the list. We found three components in the pupil responses (early, middle, and late). While the additional memory task (recall versus no recall) lowered all components' values, the less favorable SNR (-4 dB versus +1 dB) increased the middle and late component values. Increasing memory demands (Recall) progressively increased the trial baseline and steepened the decrease of the late component's values along the list. The trial baseline increased most steeply in the +1 dB SNR condition with recall. The findings suggest that adding a recall task to the auditory task alters effort allocation for listening: listeners dynamically re-allocate effort from listening to memorizing under changing memory and acoustic demands. The pupil baseline and the time-specific components of pupil responses provide a comprehensive picture of the interplay of SNR and recall on effort.
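The sketch below illustrates, under assumed window boundaries and sampling rate, how a single trial's pupil trace can be reduced to the quantities analyzed here: a pre-stimulus trial baseline and the mean baseline-corrected dilation in early, middle, and late windows.

```python
# Per-trial pupil summary sketch; the 60 Hz rate and window edges are assumptions.
import numpy as np

FS = 60  # samples per second (assumed eye-tracker rate)

def pupil_components(trace, onset_s=1.0,
                     windows={"early": (0.0, 1.0),
                              "middle": (1.0, 2.5),
                              "late": (2.5, 4.0)}):
    """trace: 1-D pupil-size array for one trial; onset_s: stimulus onset time.
    Returns the trial baseline and mean dilation per post-onset window."""
    onset = int(onset_s * FS)
    baseline = trace[:onset].mean()          # trial baseline pupil size
    rel = trace - baseline                   # baseline-corrected response
    comps = {"baseline": baseline}
    for name, (t0, t1) in windows.items():
        comps[name] = rel[onset + int(t0 * FS): onset + int(t1 * FS)].mean()
    return comps

trial = np.concatenate([np.full(60, 4.0),                 # 1 s pre-stimulus baseline
                        4.0 + 0.3 * np.hanning(240)])     # 4 s dilation response
print({k: round(v, 3) for k, v in pupil_components(trial).items()})
```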
{"title":"Time-specific Components of Pupil Responses Reveal Alternations in Effort Allocation Caused by Memory Task Demands During Speech Identification in Noise.","authors":"Patrycja Książek, Adriana A Zekveld, Lorenz Fiedler, Sophia E Kramer, Dorothea Wendt","doi":"10.1177/23312165231153280","DOIUrl":"https://doi.org/10.1177/23312165231153280","url":null,"abstract":"<p><p>Daily communication may be effortful due to poor acoustic quality. In addition, memory demands can induce effort, especially for long or complex sentences. In the current study, we tested the impact of memory task demands and speech-to-noise ratio on the time-specific components of effort allocation during speech identification in noise. Thirty normally hearing adults (15 females, mean age 42.2 years) participated. In an established auditory memory test, listeners had to listen to a list of seven sentences in noise, and repeat the sentence-final word after presentation, and, if instructed, recall the repeated words. We tested the effects of speech-to-noise ratio (SNR; -4 dB, +1 dB) and recall (Recall; Yes, No), on the time-specific components of pupil responses, trial baseline pupil size, and their dynamics (change) along the list. We found three components in the pupil responses (early, middle, and late). While the additional memory task (recall versus no recall) lowered all components' values, SNR (-4 dB versus +1 dB SNR) increased the middle and late component values. Increasing memory demands (Recall) progressively increased trial baseline and steepened decrease of the late component's values. Trial baseline increased most steeply in the condition of +1 dB SNR with recall. The findings suggest that adding a recall to the auditory task alters effort allocation for listening. Listeners are dynamically re-allocating effort from listening to memorizing under changing memory and acoustic demands. The pupil baseline and the time-specific components of pupil responses provide a comprehensive picture of the interplay of SNR and recall on effort.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231153280"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/85/b7/10.1177_23312165231153280.PMC10028670.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9514033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Auditory Ecology: Extending Hearing Research to the Perception of Natural Soundscapes by Humans in Rapidly Changing Environments
Christian Lorenzi, Frédéric Apoux, Elie Grinfeder, Bernie Krause, Nicole Miller-Viacava, Jérôme Sueur
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231212032
Research in hearing sciences has provided extensive knowledge about how the human auditory system processes speech and assists communication. In contrast, little is known about how this system processes "natural soundscapes," that is, the complex arrangements of biological and geophysical sounds shaped by sound propagation through non-anthropogenic habitats [Grinfeder et al. (2022). Frontiers in Ecology and Evolution, 10: 894232]. This is surprising given that, for many species, the capacity to process natural soundscapes determines survival and reproduction through the ability to represent and monitor the immediate environment. Here we propose a framework to encourage research programmes in the field of "human auditory ecology," focusing on the study of human auditory perception of the ecological processes at work in natural habitats. Based on large acoustic databases with high ecological validity, these programmes should investigate the extent to which this presumably ancestral monitoring function of the human auditory system is adapted to the specific information conveyed by natural soundscapes, whether it operates throughout the life span, and whether it emerges through individual learning or cultural transmission. Beyond fundamental knowledge of human hearing, these programmes should yield a better understanding of how normal-hearing and hearing-impaired listeners monitor rural and city green and blue spaces and benefit from them, and whether rehabilitation devices (hearing aids and cochlear implants) restore natural soundscape perception and emotional responses back to normal. Importantly, they should also reveal whether and how humans hear the rapid changes in the environment brought about by human activity.
{"title":"Human Auditory Ecology: Extending Hearing Research to the Perception of Natural Soundscapes by Humans in Rapidly Changing Environments.","authors":"Christian Lorenzi, Frédéric Apoux, Elie Grinfeder, Bernie Krause, Nicole Miller-Viacava, Jérôme Sueur","doi":"10.1177/23312165231212032","DOIUrl":"10.1177/23312165231212032","url":null,"abstract":"<p><p>Research in hearing sciences has provided extensive knowledge about how the human auditory system processes speech and assists communication. In contrast, little is known about how this system processes \"natural soundscapes,\" that is the complex arrangements of biological and geophysical sounds shaped by sound propagation through non-anthropogenic habitats [Grinfeder et al. (2022). <i>Frontiers in Ecology and Evolution. 10:</i> 894232]. This is surprising given that, for many species, the capacity to process natural soundscapes determines survival and reproduction through the ability to represent and monitor the immediate environment. Here we propose a framework to encourage research programmes in the field of \"human auditory ecology,\" focusing on the study of human auditory perception of ecological processes at work in natural habitats. Based on large acoustic databases with high ecological validity, these programmes should investigate the extent to which this presumably ancestral monitoring function of the human auditory system is adapted to specific information conveyed by natural soundscapes, whether it operate throughout the life span or whether it emerges through individual learning or cultural transmission. Beyond fundamental knowledge of human hearing, these programmes should yield a better understanding of how normal-hearing and hearing-impaired listeners monitor rural and city green and blue spaces and benefit from them, and whether rehabilitation devices (hearing aids and cochlear implants) restore natural soundscape perception and emotional responses back to normal. Importantly, they should also reveal whether and how humans hear the rapid changes in the environment brought about by human activity.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231212032"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10658775/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138048241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Grouping by Time and Pitch Facilitates Free but Not Cued Recall for Word Lists in Normally-Hearing Listeners
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231181757
Anastasia G Sares, Annie C Gilbert, Yue Zhang, Maria Iordanov, Alexandre Lehmann, Mickael L D Deroche
Auditory memory is an important everyday skill, evaluated increasingly often in clinical settings as the cost of hearing loss to cognitive systems has gained greater recognition. Testing often involves reading a list of unrelated items aloud, but prosodic variations in pitch and timing across the list can affect the number of items remembered. Here, we ran a series of online studies on normally-hearing participants to provide normative data (with a larger and more diverse population than the typical student sample) for a novel protocol characterizing the effects of suprasegmental properties in speech, namely pitch patterns, fast and slow pacing, and interactions between pitch and time grouping. In addition to free recall, and in line with our desire to work eventually with individuals exhibiting more limited cognitive capacity, we included a cued recall task to help participants recover specifically the words forgotten during the free recall part. We replicated key findings from previous research, demonstrating the benefits of slower pacing and of grouping on free recall. However, only slower pacing led to better performance on cued recall, indicating that grouping effects may decay surprisingly fast (within about a minute) compared to the effect of slowed pacing. These results provide a benchmark for future comparisons of short-term recall performance in hearing-impaired listeners and users of cochlear implants.
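To make the grouping manipulations concrete, here is a small sketch of a presentation schedule in which a longer pause (time grouping) and/or an alternating pitch register (pitch grouping) marks every third item; the specific timing and pitch values are assumptions for illustration.

```python
# Illustrative list-presentation schedule with time and/or pitch grouping;
# item counts, intervals, and pitch labels are assumed values.
def schedule(n_items=9, ioi_s=1.0, pause_s=0.5, group=3,
             time_grouping=True, pitch_grouping=True):
    t, items = 0.0, []
    for i in range(n_items):
        # Pitch register alternates between successive groups of three items.
        pitch = "high" if (pitch_grouping and (i // group) % 2 == 0) else "low"
        items.append((i, round(t, 2), pitch))
        t += ioi_s
        if time_grouping and (i + 1) % group == 0:
            t += pause_s  # extra pause marks the group boundary
    return items

for item in schedule():
    print(item)  # (item index, onset time in s, pitch register)
```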
{"title":"Grouping by Time and Pitch Facilitates Free but Not Cued Recall for Word Lists in Normally-Hearing Listeners.","authors":"Anastasia G Sares, Annie C Gilbert, Yue Zhang, Maria Iordanov, Alexandre Lehmann, Mickael L D Deroche","doi":"10.1177/23312165231181757","DOIUrl":"https://doi.org/10.1177/23312165231181757","url":null,"abstract":"<p><p>Auditory memory is an important everyday skill evaluated more and more frequently in clinical settings as there is recently a greater recognition of the cost of hearing loss to cognitive systems. Testing often involves reading a list of unrelated items aloud; but prosodic variations in pitch and timing across the list can affect the number of items remembered. Here, we ran a series of online studies on normally-hearing participants to provide normative data (with a larger and more diverse population than the typical student sample) on a novel protocol characterizing the effects of suprasegmental properties in speech, namely investigating pitch patterns, fast and slow pacing, and interactions between pitch and time grouping. In addition to free recall, and in line with our desire to work eventually with individuals exhibiting more limited cognitive capacity, we included a cued recall task to help participants recover specifically the words forgotten during the free recall part. We replicated key findings from previous research, demonstrating the benefits of slower pacing and of grouping on free recall. However, only slower pacing led to better performance on cued recall, indicating that grouping effects may decay surprisingly fast (over a matter of one minute) compared to the effect of slowed pacing. These results provide a benchmark for future comparisons of short-term recall performance in hearing-impaired listeners and users of cochlear implants.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231181757"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/a6/25/10.1177_23312165231181757.PMC10286184.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9712047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward a Unified Theory of the Reference Frame of the Ventriloquism Aftereffect
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231201020
Peter Lokša, Norbert Kopčo
The ventriloquism aftereffect (VAE), observed as a shift in the perceived locations of sounds after audio-visual stimulation, requires reference frame (RF) alignment since hearing and vision encode space in different RFs (head-centered vs. eye-centered). Previous experimental studies reported inconsistent results, observing either a mixture of head-centered and eye-centered frames, or a predominantly head-centered frame. Here, a computational model is introduced, examining the neural mechanisms underlying these effects. The basic model version assumes that the auditory spatial map is head-centered and the visual signals are converted to head-centered frame prior to inducing the adaptation. Two mechanisms are considered as extended model versions to describe the mixed-frame experimental data: (1) additional presence of visual signals in eye-centered frame and (2) eye-gaze direction-dependent attenuation in VAE when eyes shift away from the training fixation. Simulation results show that the mixed-frame results are mainly due to the second mechanism, suggesting that the RF of VAE is mainly head-centered. Additionally, a mechanism is proposed to explain a new ventriloquism-aftereffect-like phenomenon in which adaptation is induced by aligned audio-visual signals when saccades are used for responding to auditory targets. A version of the model extended to consider such response-method-related biases accurately predicts the new phenomenon. When attempting to model all the experimentally observed phenomena simultaneously, the model predictions are qualitatively similar but less accurate, suggesting that the proposed neural mechanisms interact in a more complex way than assumed in the model.
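The sketch below illustrates the reference-frame logic described above, with illustrative parameters rather than the published implementation: adaptation is stored on a head-centered auditory map after visual signals are converted to head-centered coordinates, and the measured aftereffect is attenuated as gaze moves away from the training fixation (mechanism 2).

```python
# Reference-frame sketch for a ventriloquism-aftereffect model; kernel width,
# learning rate, and attenuation slope are assumed values.
import numpy as np

def train_shift_map(az_grid, av_pairs, sigma=10.0, rate=0.5):
    """av_pairs: (audio_az, visual_az) pairs in head-centered degrees.
    Returns the adaptation (perceived-location shift) stored on the map."""
    shift = np.zeros_like(az_grid, dtype=float)
    for a_az, v_az in av_pairs:
        w = np.exp(-0.5 * ((az_grid - a_az) / sigma) ** 2)  # local learning kernel
        shift += rate * w * (v_az - a_az)                   # pull toward visual azimuth
    return shift

def aftereffect(az_grid, shift, test_az, gaze_offset_deg, k=0.02):
    """Head-centered readout, attenuated as gaze moves away from the
    training fixation (mechanism 2 in the text)."""
    atten = max(0.0, 1.0 - k * abs(gaze_offset_deg))
    return atten * np.interp(test_az, az_grid, shift)

grid = np.arange(-60, 61)
shift = train_shift_map(grid, [(0, 8), (10, 18)])   # visual displaced +8 deg
print(aftereffect(grid, shift, test_az=5, gaze_offset_deg=0))
print(aftereffect(grid, shift, test_az=5, gaze_offset_deg=20))  # smaller aftereffect
```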
{"title":"Toward a Unified Theory of the Reference Frame of the Ventriloquism Aftereffect.","authors":"Peter Lokša, Norbert Kopčo","doi":"10.1177/23312165231201020","DOIUrl":"10.1177/23312165231201020","url":null,"abstract":"<p><p>The ventriloquism aftereffect (VAE), observed as a shift in the perceived locations of sounds after audio-visual stimulation, requires reference frame (RF) alignment since hearing and vision encode space in different RFs (head-centered vs. eye-centered). Previous experimental studies reported inconsistent results, observing either a mixture of head-centered and eye-centered frames, or a predominantly head-centered frame. Here, a computational model is introduced, examining the neural mechanisms underlying these effects. The basic model version assumes that the auditory spatial map is head-centered and the visual signals are converted to head-centered frame prior to inducing the adaptation. Two mechanisms are considered as extended model versions to describe the mixed-frame experimental data: (1) additional presence of visual signals in eye-centered frame and (2) eye-gaze direction-dependent attenuation in VAE when eyes shift away from the training fixation. Simulation results show that the mixed-frame results are mainly due to the second mechanism, suggesting that the RF of VAE is mainly head-centered. Additionally, a mechanism is proposed to explain a new ventriloquism-aftereffect-like phenomenon in which adaptation is induced by aligned audio-visual signals when saccades are used for responding to auditory targets. A version of the model extended to consider such response-method-related biases accurately predicts the new phenomenon. When attempting to model all the experimentally observed phenomena simultaneously, the model predictions are qualitatively similar but less accurate, suggesting that the proposed neural mechanisms interact in a more complex way than assumed in the model.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231201020"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/ff/13/10.1177_23312165231201020.PMC10505348.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10670951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adapting to the Sound of Music - Development of Music Discrimination Skills in Recently Implanted CI Users
Pub Date: 2023-01-01 | DOI: 10.1177/23312165221148035
Alberte B Seeberg, Niels T Haumann, Andreas Højlund, Anne S F Andersen, Kathleen F Faulkner, Elvira Brattico, Peter Vuust, Bjørn Petersen
Cochlear implants (CIs) are optimized for speech perception but poor at conveying musical sound features such as pitch, melody, and timbre. Here, we investigated the early development of discrimination of musical sound features after cochlear implantation. Nine recently implanted CI users (CIre) were tested shortly after switch-on (T1) and approximately 3 months later (T2), using a musical multifeature mismatch negativity (MMN) paradigm presenting four deviant features (intensity, pitch, timbre, and rhythm), and a three-alternative forced-choice behavioral test. For reference, groups of experienced CI users (CIex; n = 13) and normally hearing (NH) controls (n = 14) underwent the same tests once. We found significant improvement in CIre's neural discrimination of pitch and timbre, as marked by increased MMN amplitudes. This was not reflected in the behavioral results. Behaviorally, CIre scored well above chance level at both time points for all features except intensity, but significantly below NH controls for all features except rhythm. Both CI groups scored significantly below NH in behavioral pitch discrimination. No significant difference was found in MMN amplitude between CIex and NH. The results indicate that development of musical discrimination can be detected neurophysiologically soon after switch-on. However, to fully take advantage of the sparse information from the implant, a prolonged adaptation period may be required. Behavioral discrimination accuracy was notably high even shortly after implant switch-on, although well below that of NH listeners. This study provides new insight into the early development of music-discrimination abilities in CI users and may have clinical and therapeutic relevance.
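For context, the sketch below shows the standard way an MMN amplitude is quantified, as assumed here: average the deviant and standard epochs separately, subtract, and take the mean difference in a latency window around the expected MMN peak. Electrode choice, latency window, and sampling rate are assumptions, not the study's exact analysis settings.

```python
# Deviant-minus-standard MMN amplitude sketch; 500 Hz rate and 100-250 ms
# window are assumed values.
import numpy as np

FS = 500  # Hz; epochs time-locked to sound onset at t = 0

def mmn_amplitude(std_epochs, dev_epochs, window_s=(0.10, 0.25)):
    """std_epochs, dev_epochs: arrays (n_trials, n_samples) from one electrode.
    Returns mean deviant-minus-standard amplitude in the MMN window."""
    diff = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)
    i0, i1 = (int(t * FS) for t in window_s)
    return diff[i0:i1].mean()

rng = np.random.default_rng(3)
std = rng.normal(0, 2, (200, 250))
dev = rng.normal(0, 2, (60, 250))
dev[:, 50:125] -= 1.0     # simulated negativity 100-250 ms after onset
print(f"MMN amplitude: {mmn_amplitude(std, dev):.2f} (a.u.)")
```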
{"title":"Adapting to the Sound of Music - Development of Music Discrimination Skills in Recently Implanted CI Users.","authors":"Alberte B Seeberg, Niels T Haumann, Andreas Højlund, Anne S F Andersen, Kathleen F Faulkner, Elvira Brattico, Peter Vuust, Bjørn Petersen","doi":"10.1177/23312165221148035","DOIUrl":"https://doi.org/10.1177/23312165221148035","url":null,"abstract":"Cochlear implants (CIs) are optimized for speech perception but poor in conveying musical sound features such as pitch, melody, and timbre. Here, we investigated the early development of discrimination of musical sound features after cochlear implantation. Nine recently implanted CI users (CIre) were tested shortly after switch-on (T1) and approximately 3 months later (T2), using a musical multifeature mismatch negativity (MMN) paradigm, presenting four deviant features (intensity, pitch, timbre, and rhythm), and a three-alternative forced-choice behavioral test. For reference, groups of experienced CI users (CIex; n = 13) and normally hearing (NH) controls (n = 14) underwent the same tests once. We found significant improvement in CIre's neural discrimination of pitch and timbre as marked by increased MMN amplitudes. This was not reflected in the behavioral results. Behaviorally, CIre scored well above chance level at both time points for all features except intensity, but significantly below NH controls for all features except rhythm. Both CI groups scored significantly below NH in behavioral pitch discrimination. No significant difference was found in MMN amplitude between CIex and NH. The results indicate that development of musical discrimination can be detected neurophysiologically early after switch-on. However, to fully take advantage of the sparse information from the implant, a prolonged adaptation period may be required. Behavioral discrimination accuracy was notably high already shortly after implant switch-on, although well below that of NH listeners. This study provides new insight into the early development of music-discrimination abilities in CI users and may have clinical and therapeutic relevance.","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165221148035"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/0a/8a/10.1177_23312165221148035.PMC9830578.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10750139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Factors Influencing Hearing Help-Seeking and Hearing Aid Uptake in Adults: A Systematic Review of the Past Decade
Pub Date: 2023-01-01 | DOI: 10.1177/23312165231157255
Megan Knoetze, Vinaya Manchaiah, Bopane Mothemela, De Wet Swanepoel
This systematic review examined the audiological and nonaudiological factors that influence hearing help-seeking and hearing aid uptake in adults with hearing loss, based on the literature published during the last decade. Peer-reviewed articles published between January 2011 and February 2022 were identified through systematic searches in the electronic databases CINAHL, PsycINFO, and MEDLINE. The review was conducted and reported according to the PRISMA protocol. Forty-two articles met the inclusion criteria. Seventy hearing help-seeking factors (42 audiological and 28 nonaudiological) and 159 hearing aid uptake factors (93 audiological and 66 nonaudiological) were investigated, with many factors reported only once (10/70 and 62/159, respectively). Hearing aid uptake had some strong predictors (e.g., hearing sensitivity), while others showed conflicting results (e.g., self-reported health). Hearing help-seeking had clear nonpredictive factors (e.g., education) as well as conflicting factors (e.g., self-reported health). Newly identified factors included cognitive anxiety, which was associated with increased help-seeking and hearing aid uptake, and urban residency and access to financial support, which were associated with hearing aid uptake. Most studies were rated as having a low level of evidence (67%) and fair quality (86%). Effective promotion of hearing help-seeking requires more research evidence. Investigating factors with conflicting results and limited evidence is important to clarify which factors support help-seeking and hearing aid uptake in adults with hearing loss. These findings can inform future research and hearing health promotion and rehabilitation practices.
{"title":"Factors Influencing Hearing Help-Seeking and Hearing Aid Uptake in Adults: A Systematic Review of the Past Decade.","authors":"Megan Knoetze, Vinaya Manchaiah, Bopane Mothemela, De Wet Swanepoel","doi":"10.1177/23312165231157255","DOIUrl":"https://doi.org/10.1177/23312165231157255","url":null,"abstract":"<p><p>This systematic review examined the audiological and nonaudiological factors that influence hearing help-seeking and hearing aid uptake in adults with hearing loss based on the literature published during the last decade. Peer-reviewed articles published between January 2011 and February 2022 were identified through systematic searches in electronic databases CINAHL, PsycINFO, and MEDLINE. The review was conducted and reported according to the PRISMA protocol. Forty-two articles met the inclusion criteria. Seventy (42 audiological and 28 nonaudiological) hearing help-seeking factors and 159 (93 audiological and 66 nonaudiological) hearing aid uptake factors were investigated with many factors reported only once (10/70 and 62/159, respectively). Hearing aid uptake had some strong predictors (e.g., hearing sensitivity) with others showing conflicting results (e.g., self-reported health). Hearing help-seeking had clear nonpredictive factors (e.g., education) and conflicting factors (e.g., self-reported health). New factors included cognitive anxiety associated with increased help-seeking and hearing aid uptake and urban residency and access to financial support with hearing aid uptake. Most studies were rated as having a low level of evidence (67%) and fair quality (86%). Effective promotion of hearing help-seeking requires more research evidence. Investigating factors with conflicting results and limited evidence is important to clarify what factors support help-seeking and hearing aid uptake in adults with hearing loss. These findings can inform future research and hearing health promotion and rehabilitation practices.</p>","PeriodicalId":48678,"journal":{"name":"Trends in Hearing","volume":"27 ","pages":"23312165231157255"},"PeriodicalIF":2.7,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/5a/a2/10.1177_23312165231157255.PMC9940236.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10752961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}