Pub Date: 2025-01-02 | Epub Date: 2024-12-16 | DOI: 10.1044/2024_JSLHR-24-00312
Kristen Bottema-Beutel, Ruoxi Guo, Caroline Braun, Kacie Dunham-Carr, Jennifer E Markfeld, Grace Pulliam, S Madison Clark, Bahar Keçeli-Kaysılı, Jacob I Feldman, Tiffany Woynaroski
Purpose: This study aims to help researchers design observational measurement systems that yield sufficiently stable scores for estimating caregiver talk among caregivers of infant siblings of autistic and non-autistic children. Stable estimates minimize error introduced by facets of the measurement system, such as variability between coders or measurement sessions.
Method: Analyses of variance were used to partition error variance between coder and session and to derive g coefficients. Decision studies determined the number of sessions and coders over which scores must be averaged to achieve sufficiently stable g coefficients (0.80). Twelve infants at elevated likelihood of an autism diagnosis and 12 infants with population-level likelihood of autism diagnosis participated in two semistructured observation sessions when the children were 12-18 months of age and again 9 months later. Caregiver follow-in talk was coded from these sessions.
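For readers unfamiliar with generalizability theory, the sketch below shows how a decision (D) study of this kind is typically computed for a fully crossed person × session × coder design; the variance components are hypothetical placeholders, not the ANOVA estimates reported in the study.

```python
# Minimal D-study sketch for a fully crossed person x session x coder design.
# The variance components below are hypothetical placeholders, not the
# estimates reported in the study.

def g_coefficient(var_p, var_ps, var_pc, var_resid, n_sessions, n_coders):
    """Relative g coefficient when scores are averaged over
    n_sessions sessions and n_coders coders."""
    rel_error = (var_ps / n_sessions
                 + var_pc / n_coders
                 + var_resid / (n_sessions * n_coders))
    return var_p / (var_p + rel_error)

# Hypothetical variance components: person, person x session,
# person x coder, residual
var_p, var_ps, var_pc, var_resid = 4.0, 1.2, 0.2, 1.6

for n_s in (1, 2, 3):
    for n_c in (1, 2):
        g = g_coefficient(var_p, var_ps, var_pc, var_resid, n_s, n_c)
        print(f"{n_s} session(s), {n_c} coder(s): g = {g:.2f}")
```

Averaging over more sessions or coders shrinks the corresponding error terms, which is why the decision study can identify the smallest design that reaches the 0.80 criterion.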
Results: Two sessions and one coder were needed to achieve sufficient stability for follow-in talk and follow-in comments for both groups of infants at both time points. However, follow-in directives did not reach sufficient stability for any combination of sessions or coders for the population-level likelihood group at either time point, or for the elevated likelihood group at Time 2.
Conclusion: Researchers should plan to collect at least two sessions to derive sufficiently stable estimates of caregiver talk in infants at elevated and general population-level likelihood for autism.
{"title":"Considerations for Measuring Caregiver Talk in Interactions With Infants at Elevated and Population-Level Likelihood for Autism: Deriving Stable Estimates.","authors":"Kristen Bottema-Beutel, Ruoxi Guo, Caroline Braun, Kacie Dunham-Carr, Jennifer E Markfeld, Grace Pulliam, S Madison Clark, Bahar Keçeli-Kaysılı, Jacob I Feldman, Tiffany Woynaroski","doi":"10.1044/2024_JSLHR-24-00312","DOIUrl":"10.1044/2024_JSLHR-24-00312","url":null,"abstract":"<p><strong>Purpose: </strong>This study aims to help researchers design observational measurement systems that yield sufficiently stable scores for estimating caregiver talk among caregivers of infant siblings of autistic and non-autistic children. Stable estimates minimize error introduced by facets of the measurement system, such as variability between coders or measurement sessions.</p><p><strong>Method: </strong>Analyses of variance were used to partition error variance between coder and session and to derive <i>g</i> coefficients. Decision studies determined the number of sessions and coders over which scores must be averaged to achieve sufficiently stable <i>g</i> coefficients (0.80). Twelve infants at elevated likelihood of an autism diagnosis and 12 infants with population-level likelihood of autism diagnosis participated in two semistructured observation sessions when the children were 12-18 months of age and again 9 months later. Caregiver follow-in talk was coded from these sessions.</p><p><strong>Results: </strong>Two sessions and one coder were needed to achieve sufficient stability for follow-in talk and follow-in comments for both groups of infants at both time points. However, follow-in directives did not reach sufficient stability for any combination of sessions or coders for the population-level likelihood group at either time point, or for the elevated likelihood group at Time 2.</p><p><strong>Conclusion: </strong>Researchers should plan to collect at least two sessions to derive sufficiently stable estimates of caregiver talk in infants at elevated and general population-level likelihood for autism.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.27996875.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"234-247"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142840218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02 | Epub Date: 2024-12-09 | DOI: 10.1044/2024_JSLHR-24-00197
Kathryn B Wiseman, Tiana M Cowan, Lauren Calandruccio, Elizabeth A Walker, Barbara Rodriguez, Jacob J Oleson, Ryan W McCreery, Lori J Leibold, Emily Buss
Purpose: This report compares device use in a cohort of Spanish-English bilingual and English monolingual children who are deaf and hard of hearing, including children fitted with traditional hearing aids, cochlear implants (CIs), and/or bone-conduction hearing devices.
Method: Participants were 84 Spanish-English bilingual children and 85 English monolingual children from clinical sites across the United States. The data represent a subset obtained in a larger clinical trial. Device use obtained via data logging was modeled as a function of language group, device type, child age, sex, and parental education.
Results: Among children with traditional hearing aids, bilingual children wore their devices significantly fewer hours per day than monolingual children, but this group difference was not observed for children with CIs or bone-conduction hearing devices. In the monolingual group, older children wore their devices significantly more hours than younger children, but this effect of age was not present in the bilingual group. Parent report was consistent with data logging for bilingual and monolingual children.
Conclusions: Spanish-English bilingual hearing aid users wore their devices less than their English monolingual peers, particularly among older children. This group effect was not observed for children with CIs or bone-conduction hearing devices. Additional studies are needed to identify factors that contribute to device use among bilingual children with hearing aids.
{"title":"Device Use Among Spanish-English Bilingual and English Monolingual Children Who Are Deaf and Hard of Hearing.","authors":"Kathryn B Wiseman, Tiana M Cowan, Lauren Calandruccio, Elizabeth A Walker, Barbara Rodriguez, Jacob J Oleson, Ryan W McCreery, Lori J Leibold, Emily Buss","doi":"10.1044/2024_JSLHR-24-00197","DOIUrl":"10.1044/2024_JSLHR-24-00197","url":null,"abstract":"<p><strong>Purpose: </strong>This report compares device use in a cohort of Spanish-English bilingual and English monolingual children who are deaf and hard of hearing, including children fitted with traditional hearing aids, cochlear implants (CIs), and/or bone-conduction hearing devices.</p><p><strong>Method: </strong>Participants were 84 Spanish-English bilingual children and 85 English monolingual children from clinical sites across the United States. The data represent a subset obtained in a larger clinical trial. Device use obtained via data logging was modeled as a function of language group, device type, child age, sex, and parental education.</p><p><strong>Results: </strong>Among children with traditional hearing aids, bilingual children wore their devices significantly fewer hours per day than monolingual children, but this group difference was not observed for children with CIs or bone-conduction hearing devices. In the monolingual group, older children wore their devices significantly more hours than younger children, but this effect of age was not present in the bilingual group. Parent report was consistent with data logging for bilingual and monolingual children.</p><p><strong>Conclusions: </strong>Spanish-English bilingual hearing aid users wore their devices less than their English monolingual peers, particularly among older children. This group effect was not observed for children with CIs or bone-conduction hearing devices. Additional studies are needed to identify factors that contribute to device use among bilingual children with hearing aids.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"282-300"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142803033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02 | Epub Date: 2024-12-12 | DOI: 10.1044/2024_JSLHR-24-00085
Adnan Shehabi, Christopher J Plack, Garreth Prendergast, Kevin J Munro, Michael A Stone, Joseph Laycock, Arwa AlJasser, Hannah Guest
Purpose: The Digits-in-Noise (DIN) test is used widely in research and, increasingly, in remote hearing screening. The reported study aimed to provide basic evaluation data for browser-based DIN software, which allows remote testing without installation of an app. It investigated the effects of test language (Arabic vs. English) and test environment (lab vs. home) on DIN thresholds and test-retest reliability. It also examined the effects of test language on the correlations between DIN and audiometric thresholds.
Method: Fifty-two bilingual adults with normal hearing aged 18-35 years completed Arabic and English diotic DIN tests (two sessions in the lab and two sessions at home via the web). Effects of language and environment on DIN thresholds were assessed via paired t tests, while intraclass and Pearson's/Spearman's correlation coefficients quantified test-retest reliability and relations to audiometric thresholds.
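A minimal sketch of the core analyses named above, using simulated thresholds; all values and variable names are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
english = rng.normal(-10.0, 1.0, 52)               # DIN SRTs in dB SNR (hypothetical)
arabic = english + 0.74 + rng.normal(0, 0.5, 52)   # ~0.74 dB higher (worse)

# Paired t test for the test-language effect
t_lang, p_lang = stats.ttest_rel(arabic, english)

# Mean absolute test-retest difference for one language/environment
retest = english + rng.normal(0, 0.8, 52)
mean_abs_diff = np.abs(english - retest).mean()

print(f"language effect: t = {t_lang:.2f}, p = {p_lang:.3g}")
print(f"mean |test-retest| difference = {mean_abs_diff:.2f} dB")
```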
Results: DIN thresholds were 0.74 dB higher (worse) for Arabic than English stimuli. Thresholds were 0.52 dB lower in the lab than at home, but the effect was not significant after correction for multiple comparisons. Intraclass and Pearson's correlation coefficients were too low for meaningful analysis due to the use of a normal-hearing sample with low between-subject variability in DIN and audiometric thresholds. However, exploratory analysis showed that absolute test-retest differences were low (< 1.2 dB, on average) for both languages and both test environments.
Conclusions: Arabic DIN thresholds were a little higher than English thresholds for the same listeners. Employing home-based rather than lab-based testing may slightly elevate DIN thresholds, but the effect was marginal. Nonetheless, both factors should be considered when interpreting DIN data. Test-retest differences were low for both languages and environments. To support hearing screening, subsequent research in audiometrically diverse listeners is required, testing the reliability of DIN thresholds and relations to hearing loss.
{"title":"Online Arabic and English Digits-in-Noise Tests: Effects of Test Language and At-Home Testing.","authors":"Adnan Shehabi, Christopher J Plack, Garreth Prendergast, Kevin J Munro, Michael A Stone, Joseph Laycock, Arwa AlJasser, Hannah Guest","doi":"10.1044/2024_JSLHR-24-00085","DOIUrl":"10.1044/2024_JSLHR-24-00085","url":null,"abstract":"<p><strong>Purpose: </strong>The Digits-in-Noise (DIN) test is used widely in research and, increasingly, in remote hearing screening. The reported study aimed to provide basic evaluation data for browser-based DIN software, which allows remote testing without installation of an app. It investigated the effects of test language (Arabic vs. English) and test environment (lab vs. home) on DIN thresholds and test-retest reliability. It also examined the effects of test language on the correlations between DIN and audiometric thresholds.</p><p><strong>Method: </strong>Fifty-two bilingual adults with normal hearing aged 18-35 years completed Arabic and English diotic DIN tests (two sessions in the lab and two sessions at home via the web). Effects of language and environment on DIN thresholds were assessed via paired <i>t</i> tests, while intraclass and Pearson's/Spearman's correlation coefficients quantified test-retest reliability and relations to audiometric thresholds.</p><p><strong>Results: </strong>DIN thresholds were 0.74 dB higher (worse) for Arabic than English stimuli. Thresholds were 0.52 dB lower in the lab than at home, but the effect was not significant after correction for multiple comparisons. Intraclass and Pearson's correlation coefficients were too low for meaningful analysis due to the use of a normal-hearing sample with low between-subject variability in DIN and audiometric thresholds. However, exploratory analysis showed that absolute test-retest differences were low (< 1.2 dB, on average) for both languages and both test environments.</p><p><strong>Conclusions: </strong>Arabic DIN thresholds were a little higher than English thresholds for the same listeners. Employing home-based rather than lab-based testing may slightly elevate DIN thresholds, but the effect was marginal. Nonetheless, both factors should be considered when interpreting DIN data. Test-retest differences were low for both languages and environments. To support hearing screening, subsequent research in audiometrically diverse listeners is required, testing the reliability of DIN thresholds and relations to hearing loss.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"388-398"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142820066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Purpose: The aim of this study was to determine whether amplifying the intensity of key words in discourse helps listeners memorize those words.
Method: We tested 135 participants' memory for key words in discourse after intensity amplification (0, 5, 7, 9, and 11 dB), and we also measured physiological indicators of attention in another 30 participants. Adobe Audition was used to modulate the intensity of key words, whereas E-Prime software was used to present the speech stimuli and test the accuracy of participants' memory.
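The intensity manipulation itself amounts to a simple linear gain on the key-word samples; a minimal sketch, assuming a mono waveform and hypothetical key-word sample boundaries:

```python
import numpy as np

def amplify_segment(signal, start, end, gain_db):
    """Scale signal[start:end] by gain_db decibels (amplitude ratio)."""
    out = signal.astype(float).copy()
    out[start:end] *= 10 ** (gain_db / 20)  # +9 dB -> factor of ~2.82
    return out

# Hypothetical usage: boost a key word spanning samples 8000-12000 by 9 dB
waveform = np.random.default_rng(1).normal(0, 0.1, 48000)
boosted = amplify_segment(waveform, 8000, 12000, 9)
```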
Results: Amplifying key word intensity by 9 dB led to a significant enhancement in memory, whereas self-reported naturalness did not differ between the 9-dB amplification and nonamplified groups. Heart rate and skin conductance level decreased with 9-dB amplification of key word intensity, indicating that amplification promoted memory by enhancing attention.
Conclusions: Our results demonstrate that amplifying the intensity of key words by 9 dB is an effective strategy for promoting memory. This research provides a theoretical basis for optimizing the acoustic parameters of audio learning materials to improve learning outcomes.
{"title":"Amplifying Sound Intensity of Key Words in Discourse Promotes Memory in Female College Students.","authors":"Zhenxu Liu, Yajie He, Wenhao Li, Sixing Cui, Ziying Fu, Xin Wang","doi":"10.1044/2024_JSLHR-24-00386","DOIUrl":"10.1044/2024_JSLHR-24-00386","url":null,"abstract":"<p><strong>Purpose: </strong>The aim of this study was to determine whether amplification of key words in discourse helped to memorize the words.</p><p><strong>Method: </strong>We tested the effect of 135 participants' memory for key words in the discourse after intensity amplification (0, 5, 7, 9, and 11 dB), and we also tested physiological indicators to measure attention levels in another 30 participants. Adobe Audition was used to modulate the intensity of key words, whereas E-prime technology was used to present speech stimuli and test the accuracy of the memory of the participants.</p><p><strong>Results: </strong>The results showed that amplifying key word intensity by 9 dB led to a significant enhancement in memory, whereas there was no difference in self-reported naturalness between amplification of key word intensity in the 9 dB and nonamplified groups. Heart rate and skin conductance level of the subjects decreased with amplification of key word intensity in the 9-dB group, which indicated that this promoted the memory effect by enhancing attention.</p><p><strong>Conclusions: </strong>Our results demonstrate that amplifying the intensity of the key words by 9 dB is an effective strategy for promoting memory. This research provides a theoretical basis for optimizing the acoustic parameters of audio learning materials to achieve better teaching effects.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.27902643.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"16-25"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142786895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02 | Epub Date: 2024-12-05 | DOI: 10.1044/2024_JSLHR-23-00820
Chieh Kao, Yang Zhang
Purpose: This study aimed to investigate infants' neural responses to changes in emotional prosody in spoken words. The focus was on understanding developmental changes and potential sex differences, aspects that were not consistently observed in previous behavioral studies.
Method: A modified multifeature oddball paradigm was used with emotional deviants (angry, happy, and sad) presented against neutral prosody (standard) within varying spoken words during a single electroencephalography recording session. The reported data included 34 infants (18 males, 16 females; age range: 3-12 months, average age: 7 months 26 days).
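MMRs of this kind are conventionally computed as deviant-minus-standard difference waves averaged within latency windows; a minimal sketch with simulated epochs (the window choices mirror the abstract, but the data and array shapes are hypothetical):

```python
import numpy as np

def mean_mmr(deviant_epochs, standard_epochs, times, window):
    """Mismatch response: deviant-minus-standard difference wave,
    averaged within a latency window (in seconds).
    Epoch arrays are trials x samples."""
    diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return diff[mask].mean()

# Hypothetical epochs: 100 trials each, 0-0.6 s at 1000 Hz
rng = np.random.default_rng(2)
times = np.arange(0, 0.6, 0.001)
standard = rng.normal(0, 1, (100, times.size))
deviant = rng.normal(0, 1, (100, times.size))

early_mmr = mean_mmr(deviant, standard, times, (0.100, 0.200))
late_mmr = mean_mmr(deviant, standard, times, (0.300, 0.500))
```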
Results: Infants exhibited distinct patterns of mismatch responses (MMRs) to different emotional prosodies in both early (100-200 ms) and late (300-500 ms) time windows following the speech onset. While both happy and angry prosodies elicited more negative early MMRs than the sad prosody across all infants, older infants showed more negative early MMRs than their younger counterparts. The distinction between early MMRs to angry and sad prosodies was more pronounced in younger infants. In the late time window, angry prosody elicited a more negative late MMR than the sad prosody, with younger infants showing more distinct late MMRs to sad and angry prosodies compared to older infants. Additionally, a sex effect was observed as male infants displayed more negative early MMRs compared to females.
Conclusions: These findings demonstrate the feasibility of the modified multifeature oddball protocol in studying neural sensitivities to emotional speech in infancy. The observed age and sex effects on infants' auditory neural responses to vocal emotions underscore the need for further research to distinguish between acoustic and emotional processing and to understand their roles in early socioemotional and language development.
{"title":"Age and Sex Differences in Infants' Neural Sensitivity to Emotional Prosodies in Spoken Words: A Multifeature Oddball Study.","authors":"Chieh Kao, Yang Zhang","doi":"10.1044/2024_JSLHR-23-00820","DOIUrl":"10.1044/2024_JSLHR-23-00820","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to investigate infants' neural responses to changes in emotional prosody in spoken words. The focus was on understanding developmental changes and potential sex differences, aspects that were not consistently observed in previous behavioral studies.</p><p><strong>Method: </strong>A modified multifeature oddball paradigm was used with emotional deviants (angry, happy, and sad) presented against neutral prosody (standard) within varying spoken words during a single electroencephalography recording session. The reported data included 34 infants (18 males, 16 females; age range: 3-12 months, average age: 7 months 26 days).</p><p><strong>Results: </strong>Infants exhibited distinct patterns of mismatch responses (MMRs) to different emotional prosodies in both early (100-200 ms) and late (300-500 ms) time windows following the speech onset. While both happy and angry prosodies elicited more negative early MMRs than the sad prosody across all infants, older infants showed more negative early MMRs than their younger counterparts. The distinction between early MMRs to angry and sad prosodies was more pronounced in younger infants. In the late time window, angry prosody elicited a more negative late MMR than the sad prosody, with younger infants showing more distinct late MMRs to sad and angry prosodies compared to older infants. Additionally, a sex effect was observed as male infants displayed more negative early MMRs compared to females.</p><p><strong>Conclusions: </strong>These findings demonstrate the feasibility of the modified multifeature oddball protocol in studying neural sensitivities to emotional speech in infancy. The observed age and sex effects on infants' auditory neural responses to vocal emotions underscore the need for further research to distinguish between acoustic and emotional processing and to understand their roles in early socioemotional and language development.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.27914553.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"332-348"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142786901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02 | Epub Date: 2024-11-19 | DOI: 10.1044/2024_JSLHR-23-00678
Michael F Dorman, Sarah C Natale, Nadine Buczak, Josh Stohl, Francesco Acciai, Andreas Büchner
Purpose: The aims of this exploratory study were (a) to assess common terms used to describe cochlear implant (CI) sound quality by patients fit with conventional CIs and (b) to compare those descriptors to previously obtained acoustic matches to CI sound quality created by single-sided deaf (SSD) patients for their normal-hearing ear.
Method: CI patients fit with Advanced Bionics (AB; n = 89), Cochlear Corporation (n = 86), and MED-EL (n = 80) implants were the participants. The patients filled out a questionnaire about CI sound quality at two time points: the period near activation (T1), recalled from memory, and the time of filling out the questionnaire (T2). The mean CI experience at T2 for the three groups ranged from 4 to 8 years. The questionnaire was composed of 25 adjectives describing sound quality.
Results: For T1, the most commonly used descriptors were Computer-like, Treble-y, Metallic, and Mickey Mouse-like. A superordinate category of HiPitched (High Pitched) gathered significantly more responses from patients with shorter electrode arrays (AB and Cochlear) than patients with longer arrays (MED-EL). At T2, the most common descriptor was Clear and was chosen by approximately two thirds of the patients. The between-group differences in responses to items in the HiPitched category, present at T1, were absent at T2.
Conclusions: The questionnaire data from conventional CI patients differ from previous sound matching data collected from SSD-CI patients. Alterations to the spectral composition of the signal are less salient to experienced conventional patients than to experienced SSD-CI patients. This is likely due to the absence, for conventional patients, of an exemplar in a normal-hearing ear against which to judge CI sound quality.
{"title":"Cochlear Implant Sound Quality.","authors":"Michael F Dorman, Sarah C Natale, Nadine Buczak, Josh Stohl, Francesco Acciai, Andreas Büchner","doi":"10.1044/2024_JSLHR-23-00678","DOIUrl":"10.1044/2024_JSLHR-23-00678","url":null,"abstract":"<p><strong>Purpose: </strong>The aims of this exploratory study were (a) to assess common terms used to describe cochlear implant (CI) sound quality by patients fit with conventional CIs and (b) to compare those descriptors to previously obtained acoustic matches to CI sound quality created by single-sided deaf (SSD) patients for their normal-hearing ear.</p><p><strong>Method: </strong>CI patients fit with Advanced Bionics (AB; <i>n</i> = 89), Cochlear Corporation (<i>n</i> = 86), and MED-EL (<i>n</i> = 80) implants were the participants. The patients filled out a questionnaire about CI sound quality for two time points: For the time near activation (T1) from memory and at the time of filling out the questionnaire (T2). The mean CI experience at T2 for the three groups ranged from 4 to 8 years. The questionnaire was composed of 25 adjectives describing sound quality.</p><p><strong>Results: </strong>For T1, the most commonly used descriptors were Computer-like, Treble-y, Metallic, and Mickey Mouse-like. A superordinate category of HiPitched (High Pitched) gathered significantly more responses from patients with shorter electrode arrays (AB and Cochlear) than patients with longer arrays (MED-EL). At T2, the most common descriptor was Clear and was chosen by approximately two thirds of the patients. The between-group differences in responses to items in the HiPitched category, present at T1, were absent at T2.</p><p><strong>Conclusions: </strong>The questionnaire data from conventional CI patients differs from previous sound matching data collected from SSD-CI patients. Alterations to the spectral composition of the signal are less salient to experienced conventional patients than to experienced SSD-CI patients. This is likely due to the absence, for conventional patients, of an exemplar in an NH ear against which to judge CI sound quality.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"323-331"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02 | Epub Date: 2024-12-17 | DOI: 10.1044/2024_JSLHR-24-00292
Cassandra Alighieri, Camille De Coster, Kim Bettens, Valerie Pereira
Purpose: This study compared the occurrence of different types of generalization (within-class, across-class, and total generalization) following motor-phonetic speech therapy and linguistic-phonological speech therapy in children with a cleft palate ± cleft lip (CP ± L).
Method: Thirteen children with a CP ± L (mean age = 7.50 years) who previously participated in a block-randomized, sham-controlled design comparing motor-phonetic therapy (n = 7) and linguistic-phonological therapy (n = 6) participated in this study. Speech samples consisting of word imitation and sentence imitation were collected at different data points before and after therapy and perceptually assessed using the Dutch translation of the Cleft Audit Protocol for Speech-Augmented. The percentages of within-class, across-class, and total generalization were calculated for the different target consonants. Generalization in the two groups was compared over time using linear mixed models (LMMs).
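One plausible way to specify the Time × Group model described above is a random-intercept mixed model in statsmodels; the data frame, column names, and values below are hypothetical stand-ins, not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per child per time point
rng = np.random.default_rng(3)
rows = []
for i in range(13):
    group = "linguistic" if i < 6 else "motor"
    for time in ("pre", "post"):
        score = rng.normal(50 if time == "pre" else 70, 10)
        rows.append({"child_id": f"c{i}", "group": group,
                     "time": time, "pct_generalization": score})
df = pd.DataFrame(rows)

# Random intercept per child; fixed effects of time, group, and interaction
model = smf.mixedlm("pct_generalization ~ time * group", df,
                    groups=df["child_id"])
print(model.fit().summary())
```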
Results: LMMs revealed significant Time × Group interactions for the percentages of within-class generalization and of total generalization in sentence imitation tasks, indicating that these percentages were significantly higher in the group of children who received linguistic-phonological intervention. No Time × Group interactions were found for the percentages of across-class generalization.
Conclusions: Generalization can occur following both motor-phonetic intervention as well as linguistic-phonological intervention. A linguistic-phonological approach, however, was observed to result in larger percentages of within-class and total generalization scores. As children with a CP ± L often receive yearlong intervention to eliminate cleft-related speech sound errors, these findings on the superior generalization effects of linguistic-phonological intervention are important to consider in clinical practice.
{"title":"Does Generalization Occur Following Speech Therapy? A Study in Children With a Cleft Palate.","authors":"Cassandra Alighieri, Camille De Coster, Kim Bettens, Valerie Pereira","doi":"10.1044/2024_JSLHR-24-00292","DOIUrl":"10.1044/2024_JSLHR-24-00292","url":null,"abstract":"<p><strong>Purpose: </strong>This study compared the occurrence of different types of generalization (within-class, across-class, and total generalization) following motor-phonetic speech therapy and linguistic-phonological speech therapy in children with a cleft palate ± cleft lip (CP ± L).</p><p><strong>Method: </strong>Thirteen children with a CP ± L (<i>M</i><sub>age</sub> = 7.50 years) who previously participated in a block-randomized, sham-controlled design comparing motor-phonetic therapy (<i>n</i> = 7) and linguistic-phonological therapy (<i>n</i> = 6) participated in this study. Speech samples consisting of word imitation and sentence imitation were collected on different data points before and after therapy and perceptually assessed using the Dutch translation of the Cleft Audit Protocol for Speech-Augmented. The percentages within-class, across-class, and total generalization were calculated for the different target consonants. Generalization in the two groups was compared over time using linear mixed models (LMMs).</p><p><strong>Results: </strong>LMM revealed significant Time × Group interactions for the percentage within-class generalization in sentence imitation and total generalization in sentence imitation tasks indicating that these percentages were significantly higher in the group of children who received linguistic-phonological intervention. No Time × Group interactions were found for the percentages across-class generalization.</p><p><strong>Conclusions: </strong>Generalization can occur following both motor-phonetic intervention as well as linguistic-phonological intervention. A linguistic-phonological approach, however, was observed to result in larger percentages of within-class and total generalization scores. As children with a CP ± L often receive yearlong intervention to eliminate cleft-related speech sound errors, these findings on the superior generalization effects of linguistic-phonological intervention are important to consider in clinical practice.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"91-104"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142848142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Purpose: The Language ENvironment Analysis (LENA) technology uses automated speech processing (ASP) algorithms to estimate counts such as total adult words and child vocalizations, which helps understand children's early language environment. This ASP has been validated in North American English and other languages in predominantly monolingual contexts but not in a multilingual context like India. Thus, the current study aims to validate the classification accuracy of the LENA algorithm specifically focusing on speaker recognition of adult segments (AdS) and child segments (ChS) in a sample of bi/multilingual families from India.
Method: Thirty neurotypical children between 6 and 24 months (M = 12.89, SD = 4.95) were recruited. Participants were growing up in a bi/multilingual environment, hearing a combination of Kannada, Tamil, Malayalam, Telugu, Hindi, and/or English. Daylong audio recordings were collected using LENA and processed using the ASP to automatically detect segments across speaker categories. Two human annotators manually annotated ~900 min (37,431 segments across speaker categories). Performance accuracy (recall and precision) was calculated for AdS and ChS.
Results: The recall and precision for AdS were 0.62 (95% confidence interval [CI] [0.61, 0.63]) and 0.83 (95% CI [0.8, 0.83]), respectively. This indicates that 62% of the segments identified as AdS by the human annotators were also identified as AdS by the LENA ASP algorithm, and that 83% of the segments labeled by the LENA ASP as AdS were also labeled by the human annotators as AdS. Similarly, the recall and precision for ChS were 0.65 (95% CI [0.64, 0.66]) and 0.55 (95% CI [0.54, 0.56]), respectively.
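The segment-level recall and precision reported above can be illustrated with a small sketch, treating the human annotation as ground truth; the labels here are hypothetical examples, not study data.

```python
def recall_precision(human_labels, lena_labels, category):
    """Recall and precision for one speaker category, with human labels
    as ground truth; inputs are parallel per-segment label lists."""
    pairs = list(zip(human_labels, lena_labels))
    tp = sum(h == category and l == category for h, l in pairs)
    fn = sum(h == category and l != category for h, l in pairs)
    fp = sum(h != category and l == category for h, l in pairs)
    recall = tp / (tp + fn) if tp + fn else float("nan")
    precision = tp / (tp + fp) if tp + fp else float("nan")
    return recall, precision

human = ["AdS", "ChS", "AdS", "AdS", "ChS"]  # hypothetical annotations
lena = ["AdS", "AdS", "AdS", "ChS", "ChS"]
print(recall_precision(human, lena, "AdS"))  # -> approximately (0.67, 0.67)
```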
Conclusions: This study documents the performance of the ASP in correctly classifying speakers as adult or child in a sample of families from India, indicating relatively low recall and precision. This study lays the groundwork for future investigations aiming to refine the algorithm models, potentially facilitating more accurate performance in bi/multilingual societies like India.
{"title":"Validation of the Language ENvironment Analysis (LENA) Automated Speech Processing Algorithm Labels for Adult and Child Segments in a Sample of Families From India.","authors":"Shoba S Meera, Divya Swaminathan, Sri Ranjani Venkata Murali, Reny Raju, Malavi Srikar, Sahana Shyam Sundar, Senthil Amudhan, Alejandrina Cristia, Rahul Pawar, Achuth Rao, Prathyusha P Vasuki, Shree Volme, Ashok Mysore","doi":"10.1044/2024_JSLHR-24-00099","DOIUrl":"10.1044/2024_JSLHR-24-00099","url":null,"abstract":"<p><strong>Purpose: </strong>The Language ENvironment Analysis (LENA) technology uses automated speech processing (ASP) algorithms to estimate counts such as total adult words and child vocalizations, which helps understand children's early language environment. This ASP has been validated in North American English and other languages in predominantly monolingual contexts but not in a multilingual context like India. Thus, the current study aims to validate the classification accuracy of the LENA algorithm specifically focusing on speaker recognition of adult segments (AdS) and child segments (ChS) in a sample of bi/multilingual families from India.</p><p><strong>Method: </strong>Thirty neurotypical children between 6 and 24 months (<i>M</i> = 12.89, <i>SD</i> = 4.95) were recruited. Participants were growing up in bi/multilingual environment hearing a combination of Kannada, Tamil, Malayalam, Telugu, Hindi, and/or English. Daylong audio recordings were collected using LENA and processed using the ASP to automatically detect segments across speaker categories. Two human annotators manually annotated ~900 min (37,431 segments across speaker categories). Performance accuracy (recall and precision) was calculated for AdS and ChS.</p><p><strong>Results: </strong>The recall and precision for AdS were 0.62 (95% confidence interval [CI] [0.61, 0.63]) and 0.83 (95% CI [0.8, 0.83]), respectively. This indicated that 62% of the segments identified as AdS by the human annotator were also identified as AdS by the LENA ASP algorithm and 83% of the segments labeled by the LENA ASP as AdS were also labeled by the human annotator as AdS. Similarly, the recall and precision for ChS were 0.65 (95% CI [0.64, 0.66]) and 0.55 (95% CI [0.54, 0.56]), respectively.</p><p><strong>Conclusions: </strong>This study documents the performance of the ASP in correctly classifying speakers as adult or child in a sample of families from India, indicating recall and precision that is relatively low. This study lays the groundwork for future investigations aiming to refine the algorithm models, potentially facilitating more accurate performance in bi/multilingual societies like India.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.27910710.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"40-53"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02 | Epub Date: 2024-12-03 | DOI: 10.1044/2024_JSLHR-24-00426
Emre Orhan, İsa Tuncay Batuk, Merve Ozbal Batuk
Purpose: The aim of this study was to investigate the balance performances of young adults with unilateral cochlear implants (CIs) in a dual-task condition.
Method: Fifteen young adults with unilateral CIs and 15 healthy individuals were included in the study. The balance task was applied using the Sensory Organization Test via Computerized Dynamic Posturography. The Backward Digit Recall task was applied as an additional concurrent cognitive task. In the balance task, participants completed four different conditions, which gradually became more difficult: Condition 1: fixed platform, eyes open; Condition 3: fixed platform, eyes open and visual environment sway; Condition 4: platform sway, eyes open; Condition 6: platform sway, eyes open and visual environment sway. To evaluate the dual-task condition performance, participants were given cognitive and motor tasks simultaneously.
Results: Visual (p = .016), vestibular (p < .001), and composite balance scores (p < .001) of CI users were statistically significantly lower than the control group. Condition 3 (p = .003), Condition 4 (p = .007), and Condition 6 (p < .001) balance scores of CI users in the single-task condition were statistically significantly lower than controls. Condition 6 (p < .001) balance scores of CI users in the dual-task condition were statistically significantly lower than the control group. Condition 1 score (p = .002) of the CI users in the dual-task condition showed a statistically significant decrease compared to the balance score in the single-task condition, while the Condition 6 score (p = .011) in the dual-task condition was statistically significantly higher than the balance score in the single-task condition.
Conclusions: The balance performance of individuals with CIs in the dual-task condition was worse than that of typical healthy individuals. These findings suggest that dual-task performance should be incorporated into vestibular rehabilitation for CI users, given their balance abilities in multitasking conditions and their risk of falling.
{"title":"Concurrent Cognitive Task Alters Postural Control Performance of Young Adults With Unilateral Cochlear Implants.","authors":"Emre Orhan, İsa Tuncay Batuk, Merve Ozbal Batuk","doi":"10.1044/2024_JSLHR-24-00426","DOIUrl":"10.1044/2024_JSLHR-24-00426","url":null,"abstract":"<p><strong>Purpose: </strong>The aim of this study was to investigate the balance performances of young adults with unilateral cochlear implants (CIs) in a dual-task condition.</p><p><strong>Method: </strong>Fifteen young adults with unilateral CIs and 15 healthy individuals were included in the study. The balance task was applied using the Sensory Organization Test via Computerized Dynamic Posturography. The Backward Digit Recall task was applied as an additional concurrent cognitive task. In the balance task, participants completed four different conditions, which gradually became more difficult: Condition 1: fixed platform, eyes open; Condition 3: fixed platform, eyes open and visual environment sway; Condition 4: platform sway, eyes open; Condition 6: platform sway, eyes open and visual environment sway. To evaluate the dual-task condition performance, participants were given cognitive and motor tasks simultaneously.</p><p><strong>Results: </strong>Visual (<i>p</i> = .016), vestibular (<i>p</i> < .001), and composite balance scores (<i>p</i> < .001) of CI users were statistically significantly lower than the control group. Condition 3 (<i>p</i> = .003), Condition 4 (<i>p</i> = .007), and Condition 6 (<i>p</i> < .001) balance scores of CI users in the single-task condition were statistically significantly lower than controls. Condition 6 (<i>p</i> < .001) balance scores of CI users in the dual-task condition were statistically significantly lower than the control group. Condition 1 score (<i>p</i> = .002) of the CI users in the dual-task condition showed a statistically significant decrease compared to the balance score in the single-task condition, while the Condition 6 score (<i>p</i> = .011) in the dual-task condition was statistically significantly higher than the balance score in the single-task condition.</p><p><strong>Conclusions: </strong>The balance performance of individuals with CIs in the dual-task condition was worse than typical healthy individuals. It can be suggested that dual-task performances should be included in the vestibular rehabilitation process in CI users in the implantation process in terms of balance abilities in multitasking conditions and risk of falling.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"377-387"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02 | Epub Date: 2024-12-02 | DOI: 10.1044/2024_JSLHR-24-00296
Margaret K Miller, Vahid Delaram, Allison Trine, Rohit M Ananthanarayana, Emily Buss, Brian B Monson, G Christopher Stecker
Introduction: We currently lack speech testing materials faithful to broader aspects of real-world auditory scenes such as speech directivity and extended high frequency (EHF; > 8 kHz) content that have demonstrable effects on speech perception. Here, we describe the development of a multidirectional, high-fidelity speech corpus using multichannel anechoic recordings that can be used for future studies of speech perception in complex environments by diverse listeners.
Design: Fifteen male and 15 female talkers (21.3-60.5 years) recorded Bamford-Kowal-Bench (BKB) Standard Sentence Test lists, digits 0-10, and a 2.5-min unscripted narrative. Recordings were made in an anechoic chamber with 17 free-field condenser microphones spanning 0°-180° azimuth angle around the talker using a 48 kHz sampling rate.
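If the 17 microphones are evenly spaced across the 0°-180° arc (an assumption; the abstract does not state the spacing explicitly), adjacent microphones sit 11.25° apart:

```python
import numpy as np

# Assuming evenly spaced microphones across the half-circle
angles = np.linspace(0, 180, 17)
print(angles)           # 0.0, 11.25, 22.5, ..., 180.0
print(np.diff(angles))  # 11.25 degrees between adjacent microphones
```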
Results: Recordings resulted in a large corpus containing four BKB lists, 10 digits, and narratives produced by 30 talkers, and an additional 17 BKB lists (21 total) produced by a subset of six talkers.
Conclusions: The goal of this study was to create an anechoic, high-fidelity, multidirectional speech corpus using standard speech materials. More naturalistic narratives, useful for the creation of babble noise and speech maskers, were also recorded. A large group of 30 talkers permits testers to select speech materials based on talker characteristics relevant to a specific task. The resulting speech corpus allows for more diverse and precise speech recognition testing, including testing effects of speech directivity and EHF content. Recordings are publicly available.
{"title":"An Anechoic, High-Fidelity, Multidirectional Speech Corpus.","authors":"Margaret K Miller, Vahid Delaram, Allison Trine, Rohit M Ananthanarayana, Emily Buss, Brian B Monson, G Christopher Stecker","doi":"10.1044/2024_JSLHR-24-00296","DOIUrl":"10.1044/2024_JSLHR-24-00296","url":null,"abstract":"<p><strong>Introduction: </strong>We currently lack speech testing materials faithful to broader aspects of real-world auditory scenes such as speech directivity and extended high frequency (EHF; > 8 kHz) content that have demonstrable effects on speech perception. Here, we describe the development of a multidirectional, high-fidelity speech corpus using multichannel anechoic recordings that can be used for future studies of speech perception in complex environments by diverse listeners.</p><p><strong>Design: </strong>Fifteen male and 15 female talkers (21.3-60.5 years) recorded Bamford-Kowal-Bench (BKB) Standard Sentence Test lists, digits 0-10, and a 2.5-min unscripted narrative. Recordings were made in an anechoic chamber with 17 free-field condenser microphones spanning 0°-180° azimuth angle around the talker using a 48 kHz sampling rate.</p><p><strong>Results: </strong>Recordings resulted in a large corpus containing four BKB lists, 10 digits, and narratives produced by 30 talkers, and an additional 17 BKB lists (21 total) produced by a subset of six talkers.</p><p><strong>Conclusions: </strong>The goal of this study was to create an anechoic, high-fidelity, multidirectional speech corpus using standard speech materials. More naturalistic narratives, useful for the creation of babble noise and speech maskers, were also recorded. A large group of 30 talkers permits testers to select speech materials based on talker characteristics relevant to a specific task. The resulting speech corpus allows for more diverse and precise speech recognition testing, including testing effects of speech directivity and EHF content. Recordings are publicly available.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"411-418"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}