Pub Date: 2025-01-02. Epub Date: 2024-11-19. DOI: 10.1044/2024_JSLHR-23-00678
Michael F Dorman, Sarah C Natale, Nadine Buczak, Josh Stohl, Francesco Acciai, Andreas Büchner
Purpose: The aims of this exploratory study were (a) to assess common terms used to describe cochlear implant (CI) sound quality by patients fit with conventional CIs and (b) to compare those descriptors to previously obtained acoustic matches to CI sound quality created by single-sided deaf (SSD) patients for their normal-hearing ear.
Method: CI patients fit with Advanced Bionics (AB; n = 89), Cochlear Corporation (n = 86), and MED-EL (n = 80) implants were the participants. The patients filled out a questionnaire about CI sound quality for two time points: the time near activation (T1), recalled from memory, and the time of filling out the questionnaire (T2). The mean CI experience at T2 for the three groups ranged from 4 to 8 years. The questionnaire was composed of 25 adjectives describing sound quality.
Results: For T1, the most commonly used descriptors were Computer-like, Treble-y, Metallic, and Mickey Mouse-like. A superordinate category of HiPitched (High Pitched) gathered significantly more responses from patients with shorter electrode arrays (AB and Cochlear) than patients with longer arrays (MED-EL). At T2, the most common descriptor was Clear and was chosen by approximately two thirds of the patients. The between-group differences in responses to items in the HiPitched category, present at T1, were absent at T2.
Conclusions: The questionnaire data from conventional CI patients differ from previous sound-matching data collected from SSD-CI patients. Alterations to the spectral composition of the signal are less salient to experienced conventional patients than to experienced SSD-CI patients. This is likely due to the absence, for conventional patients, of an exemplar in a normal-hearing (NH) ear against which to judge CI sound quality.
{"title":"Cochlear Implant Sound Quality.","authors":"Michael F Dorman, Sarah C Natale, Nadine Buczak, Josh Stohl, Francesco Acciai, Andreas Büchner","doi":"10.1044/2024_JSLHR-23-00678","DOIUrl":"10.1044/2024_JSLHR-23-00678","url":null,"abstract":"<p><strong>Purpose: </strong>The aims of this exploratory study were (a) to assess common terms used to describe cochlear implant (CI) sound quality by patients fit with conventional CIs and (b) to compare those descriptors to previously obtained acoustic matches to CI sound quality created by single-sided deaf (SSD) patients for their normal-hearing ear.</p><p><strong>Method: </strong>CI patients fit with Advanced Bionics (AB; <i>n</i> = 89), Cochlear Corporation (<i>n</i> = 86), and MED-EL (<i>n</i> = 80) implants were the participants. The patients filled out a questionnaire about CI sound quality for two time points: For the time near activation (T1) from memory and at the time of filling out the questionnaire (T2). The mean CI experience at T2 for the three groups ranged from 4 to 8 years. The questionnaire was composed of 25 adjectives describing sound quality.</p><p><strong>Results: </strong>For T1, the most commonly used descriptors were Computer-like, Treble-y, Metallic, and Mickey Mouse-like. A superordinate category of HiPitched (High Pitched) gathered significantly more responses from patients with shorter electrode arrays (AB and Cochlear) than patients with longer arrays (MED-EL). At T2, the most common descriptor was Clear and was chosen by approximately two thirds of the patients. The between-group differences in responses to items in the HiPitched category, present at T1, were absent at T2.</p><p><strong>Conclusions: </strong>The questionnaire data from conventional CI patients differs from previous sound matching data collected from SSD-CI patients. Alterations to the spectral composition of the signal are less salient to experienced conventional patients than to experienced SSD-CI patients. This is likely due to the absence, for conventional patients, of an exemplar in an NH ear against which to judge CI sound quality.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"323-331"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02. Epub Date: 2024-12-17. DOI: 10.1044/2024_JSLHR-24-00292
Cassandra Alighieri, Camille De Coster, Kim Bettens, Valerie Pereira
Purpose: This study compared the occurrence of different types of generalization (within-class, across-class, and total generalization) following motor-phonetic speech therapy and linguistic-phonological speech therapy in children with a cleft palate ± cleft lip (CP ± L).
Method: Thirteen children with a CP ± L (mean age = 7.50 years) who previously participated in a block-randomized, sham-controlled design comparing motor-phonetic therapy (n = 7) and linguistic-phonological therapy (n = 6) took part in this study. Speech samples consisting of word imitation and sentence imitation were collected at several data points before and after therapy and perceptually assessed using the Dutch translation of the Cleft Audit Protocol for Speech-Augmented. The percentages of within-class, across-class, and total generalization were calculated for the different target consonants. Generalization in the two groups was compared over time using linear mixed models (LMMs).
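To make the modeling step concrete, the following is a minimal sketch, not the authors' code, of how a Time × Group linear mixed model on generalization percentages could be fit with Python's statsmodels; the file name and the columns child_id, time, group, and pct_within_class are hypothetical.

```python
# Minimal sketch of a linear mixed model with a Time x Group interaction.
# File and column names are hypothetical, not the study's materials.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("generalization_scores.csv")
# expected columns: child_id, time (numeric), group (motor / linguistic),
# pct_within_class (percentage within-class generalization)

# Random intercept per child; the time:group coefficient tests whether
# generalization trajectories differ between the two therapy groups.
model = smf.mixedlm("pct_within_class ~ time * group", data=df,
                    groups=df["child_id"])
result = model.fit()
print(result.summary())
```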
Results: LMMs revealed significant Time × Group interactions for the percentage of within-class generalization and the percentage of total generalization in sentence imitation tasks, indicating that these percentages were significantly higher in the group of children who received linguistic-phonological intervention. No Time × Group interactions were found for the percentages of across-class generalization.
Conclusions: Generalization can occur following both motor-phonetic and linguistic-phonological intervention. A linguistic-phonological approach, however, resulted in larger percentages of within-class and total generalization. As children with a CP ± L often receive yearlong intervention to eliminate cleft-related speech sound errors, these findings on the superior generalization effects of linguistic-phonological intervention are important to consider in clinical practice.
{"title":"Does Generalization Occur Following Speech Therapy? A Study in Children With a Cleft Palate.","authors":"Cassandra Alighieri, Camille De Coster, Kim Bettens, Valerie Pereira","doi":"10.1044/2024_JSLHR-24-00292","DOIUrl":"10.1044/2024_JSLHR-24-00292","url":null,"abstract":"<p><strong>Purpose: </strong>This study compared the occurrence of different types of generalization (within-class, across-class, and total generalization) following motor-phonetic speech therapy and linguistic-phonological speech therapy in children with a cleft palate ± cleft lip (CP ± L).</p><p><strong>Method: </strong>Thirteen children with a CP ± L (<i>M</i><sub>age</sub> = 7.50 years) who previously participated in a block-randomized, sham-controlled design comparing motor-phonetic therapy (<i>n</i> = 7) and linguistic-phonological therapy (<i>n</i> = 6) participated in this study. Speech samples consisting of word imitation and sentence imitation were collected on different data points before and after therapy and perceptually assessed using the Dutch translation of the Cleft Audit Protocol for Speech-Augmented. The percentages within-class, across-class, and total generalization were calculated for the different target consonants. Generalization in the two groups was compared over time using linear mixed models (LMMs).</p><p><strong>Results: </strong>LMM revealed significant Time × Group interactions for the percentage within-class generalization in sentence imitation and total generalization in sentence imitation tasks indicating that these percentages were significantly higher in the group of children who received linguistic-phonological intervention. No Time × Group interactions were found for the percentages across-class generalization.</p><p><strong>Conclusions: </strong>Generalization can occur following both motor-phonetic intervention as well as linguistic-phonological intervention. A linguistic-phonological approach, however, was observed to result in larger percentages of within-class and total generalization scores. As children with a CP ± L often receive yearlong intervention to eliminate cleft-related speech sound errors, these findings on the superior generalization effects of linguistic-phonological intervention are important to consider in clinical practice.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"91-104"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142848142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02. DOI: 10.1044/2024_JSLHR-24-00099
Shoba S Meera, Divya Swaminathan, Sri Ranjani Venkata Murali, Reny Raju, Malavi Srikar, Sahana Shyam Sundar, Senthil Amudhan, Alejandrina Cristia, Rahul Pawar, Achuth Rao, Prathyusha P Vasuki, Shree Volme, Ashok Mysore
Purpose: The Language ENvironment Analysis (LENA) technology uses automated speech processing (ASP) algorithms to estimate counts such as total adult words and child vocalizations, which help researchers understand children's early language environment. This ASP has been validated in North American English and other languages in predominantly monolingual contexts but not in a multilingual context like India. Thus, the current study aims to validate the classification accuracy of the LENA algorithm, specifically focusing on speaker recognition of adult segments (AdS) and child segments (ChS) in a sample of bi/multilingual families from India.
Method: Thirty neurotypical children between 6 and 24 months of age (M = 12.89 months, SD = 4.95) were recruited. Participants were growing up in a bi/multilingual environment hearing a combination of Kannada, Tamil, Malayalam, Telugu, Hindi, and/or English. Daylong audio recordings were collected using LENA and processed using the ASP to automatically detect segments across speaker categories. Two human annotators manually annotated ~900 min (37,431 segments across speaker categories). Performance accuracy (recall and precision) was calculated for AdS and ChS.
Results: The recall and precision for AdS were 0.62 (95% confidence interval [CI] [0.61, 0.63]) and 0.83 (95% CI [0.80, 0.83]), respectively. This indicated that 62% of the segments identified as AdS by the human annotator were also identified as AdS by the LENA ASP algorithm and that 83% of the segments labeled by the LENA ASP as AdS were also labeled by the human annotator as AdS. Similarly, the recall and precision for ChS were 0.65 (95% CI [0.64, 0.66]) and 0.55 (95% CI [0.54, 0.56]), respectively.
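To make these metrics concrete, here is a minimal sketch of the recall and precision computation for one speaker category; it assumes the human and LENA ASP labels have already been aligned segment by segment, a simplification of the study's actual evaluation pipeline.

```python
# Illustrative recall/precision for one speaker category (e.g., AdS).
# Assumes human and ASP labels are aligned one-to-one, a simplification.
def recall_precision(human_labels, asp_labels, category="AdS"):
    true_pos = sum(h == category and a == category
                   for h, a in zip(human_labels, asp_labels))
    human_pos = sum(h == category for h in human_labels)  # ground-truth count
    asp_pos = sum(a == category for a in asp_labels)      # algorithm count
    recall = true_pos / human_pos if human_pos else 0.0
    precision = true_pos / asp_pos if asp_pos else 0.0
    return recall, precision

# Toy example with five aligned segments:
human = ["AdS", "AdS", "ChS", "AdS", "ChS"]
asp = ["AdS", "ChS", "ChS", "AdS", "AdS"]
print(recall_precision(human, asp))  # -> (0.666..., 0.666...)
```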
Conclusions: This study documents the performance of the ASP in correctly classifying speakers as adult or child in a sample of families from India, indicating relatively low recall and precision. This study lays the groundwork for future investigations aiming to refine the algorithm models, potentially facilitating more accurate performance in bi/multilingual societies like India.
{"title":"Validation of the Language ENvironment Analysis (LENA) Automated Speech Processing Algorithm Labels for Adult and Child Segments in a Sample of Families From India.","authors":"Shoba S Meera, Divya Swaminathan, Sri Ranjani Venkata Murali, Reny Raju, Malavi Srikar, Sahana Shyam Sundar, Senthil Amudhan, Alejandrina Cristia, Rahul Pawar, Achuth Rao, Prathyusha P Vasuki, Shree Volme, Ashok Mysore","doi":"10.1044/2024_JSLHR-24-00099","DOIUrl":"10.1044/2024_JSLHR-24-00099","url":null,"abstract":"<p><strong>Purpose: </strong>The Language ENvironment Analysis (LENA) technology uses automated speech processing (ASP) algorithms to estimate counts such as total adult words and child vocalizations, which helps understand children's early language environment. This ASP has been validated in North American English and other languages in predominantly monolingual contexts but not in a multilingual context like India. Thus, the current study aims to validate the classification accuracy of the LENA algorithm specifically focusing on speaker recognition of adult segments (AdS) and child segments (ChS) in a sample of bi/multilingual families from India.</p><p><strong>Method: </strong>Thirty neurotypical children between 6 and 24 months (<i>M</i> = 12.89, <i>SD</i> = 4.95) were recruited. Participants were growing up in bi/multilingual environment hearing a combination of Kannada, Tamil, Malayalam, Telugu, Hindi, and/or English. Daylong audio recordings were collected using LENA and processed using the ASP to automatically detect segments across speaker categories. Two human annotators manually annotated ~900 min (37,431 segments across speaker categories). Performance accuracy (recall and precision) was calculated for AdS and ChS.</p><p><strong>Results: </strong>The recall and precision for AdS were 0.62 (95% confidence interval [CI] [0.61, 0.63]) and 0.83 (95% CI [0.8, 0.83]), respectively. This indicated that 62% of the segments identified as AdS by the human annotator were also identified as AdS by the LENA ASP algorithm and 83% of the segments labeled by the LENA ASP as AdS were also labeled by the human annotator as AdS. Similarly, the recall and precision for ChS were 0.65 (95% CI [0.64, 0.66]) and 0.55 (95% CI [0.54, 0.56]), respectively.</p><p><strong>Conclusions: </strong>This study documents the performance of the ASP in correctly classifying speakers as adult or child in a sample of families from India, indicating recall and precision that is relatively low. This study lays the groundwork for future investigations aiming to refine the algorithm models, potentially facilitating more accurate performance in bi/multilingual societies like India.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.27910710.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"40-53"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02. Epub Date: 2024-12-03. DOI: 10.1044/2024_JSLHR-24-00426
Emre Orhan, İsa Tuncay Batuk, Merve Ozbal Batuk
Purpose: The aim of this study was to investigate the balance performances of young adults with unilateral cochlear implants (CIs) in a dual-task condition.
Method: Fifteen young adults with unilateral CIs and 15 healthy individuals were included in the study. The balance task was applied using the Sensory Organization Test via Computerized Dynamic Posturography. The Backward Digit Recall task was applied as an additional concurrent cognitive task. In the balance task, participants completed four conditions of gradually increasing difficulty, numbered according to the standard Sensory Organization Test protocol: Condition 1: fixed platform, eyes open; Condition 3: fixed platform, eyes open, visual environment sway; Condition 4: platform sway, eyes open; Condition 6: platform sway, eyes open, visual environment sway. To evaluate dual-task performance, participants completed the cognitive and motor tasks simultaneously.
Results: Visual (p = .016), vestibular (p < .001), and composite balance scores (p < .001) of CI users were statistically significantly lower than the control group. Condition 3 (p = .003), Condition 4 (p = .007), and Condition 6 (p < .001) balance scores of CI users in the single-task condition were statistically significantly lower than controls. Condition 6 (p < .001) balance scores of CI users in the dual-task condition were statistically significantly lower than the control group. Condition 1 score (p = .002) of the CI users in the dual-task condition showed a statistically significant decrease compared to the balance score in the single-task condition, while the Condition 6 score (p = .011) in the dual-task condition was statistically significantly higher than the balance score in the single-task condition.
Conclusions: The balance performance of individuals with CIs in the dual-task condition was worse than that of typical healthy individuals. These findings suggest that dual-task performance should be incorporated into vestibular rehabilitation for CI users, given its relevance to balance in multitasking conditions and to fall risk.
{"title":"Concurrent Cognitive Task Alters Postural Control Performance of Young Adults With Unilateral Cochlear Implants.","authors":"Emre Orhan, İsa Tuncay Batuk, Merve Ozbal Batuk","doi":"10.1044/2024_JSLHR-24-00426","DOIUrl":"10.1044/2024_JSLHR-24-00426","url":null,"abstract":"<p><strong>Purpose: </strong>The aim of this study was to investigate the balance performances of young adults with unilateral cochlear implants (CIs) in a dual-task condition.</p><p><strong>Method: </strong>Fifteen young adults with unilateral CIs and 15 healthy individuals were included in the study. The balance task was applied using the Sensory Organization Test via Computerized Dynamic Posturography. The Backward Digit Recall task was applied as an additional concurrent cognitive task. In the balance task, participants completed four different conditions, which gradually became more difficult: Condition 1: fixed platform, eyes open; Condition 3: fixed platform, eyes open and visual environment sway; Condition 4: platform sway, eyes open; Condition 6: platform sway, eyes open and visual environment sway. To evaluate the dual-task condition performance, participants were given cognitive and motor tasks simultaneously.</p><p><strong>Results: </strong>Visual (<i>p</i> = .016), vestibular (<i>p</i> < .001), and composite balance scores (<i>p</i> < .001) of CI users were statistically significantly lower than the control group. Condition 3 (<i>p</i> = .003), Condition 4 (<i>p</i> = .007), and Condition 6 (<i>p</i> < .001) balance scores of CI users in the single-task condition were statistically significantly lower than controls. Condition 6 (<i>p</i> < .001) balance scores of CI users in the dual-task condition were statistically significantly lower than the control group. Condition 1 score (<i>p</i> = .002) of the CI users in the dual-task condition showed a statistically significant decrease compared to the balance score in the single-task condition, while the Condition 6 score (<i>p</i> = .011) in the dual-task condition was statistically significantly higher than the balance score in the single-task condition.</p><p><strong>Conclusions: </strong>The balance performance of individuals with CIs in the dual-task condition was worse than typical healthy individuals. It can be suggested that dual-task performances should be included in the vestibular rehabilitation process in CI users in the implantation process in terms of balance abilities in multitasking conditions and risk of falling.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"377-387"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02. Epub Date: 2024-12-02. DOI: 10.1044/2024_JSLHR-24-00296
Margaret K Miller, Vahid Delaram, Allison Trine, Rohit M Ananthanarayana, Emily Buss, Brian B Monson, G Christopher Stecker
Introduction: We currently lack speech testing materials faithful to broader aspects of real-world auditory scenes such as speech directivity and extended high frequency (EHF; > 8 kHz) content that have demonstrable effects on speech perception. Here, we describe the development of a multidirectional, high-fidelity speech corpus using multichannel anechoic recordings that can be used for future studies of speech perception in complex environments by diverse listeners.
Design: Fifteen male and 15 female talkers (21.3-60.5 years) recorded Bamford-Kowal-Bench (BKB) Standard Sentence Test lists, digits 0-10, and a 2.5-min unscripted narrative. Recordings were made in an anechoic chamber with 17 free-field condenser microphones spanning 0°-180° azimuth angle around the talker using a 48 kHz sampling rate.
Results: Recordings resulted in a large corpus containing four BKB lists, 10 digits, and narratives produced by 30 talkers, and an additional 17 BKB lists (21 total) produced by a subset of six talkers.
Conclusions: The goal of this study was to create an anechoic, high-fidelity, multidirectional speech corpus using standard speech materials. More naturalistic narratives, useful for the creation of babble noise and speech maskers, were also recorded. A large group of 30 talkers permits testers to select speech materials based on talker characteristics relevant to a specific task. The resulting speech corpus allows for more diverse and precise speech recognition testing, including testing effects of speech directivity and EHF content. Recordings are publicly available.
{"title":"An Anechoic, High-Fidelity, Multidirectional Speech Corpus.","authors":"Margaret K Miller, Vahid Delaram, Allison Trine, Rohit M Ananthanarayana, Emily Buss, Brian B Monson, G Christopher Stecker","doi":"10.1044/2024_JSLHR-24-00296","DOIUrl":"10.1044/2024_JSLHR-24-00296","url":null,"abstract":"<p><strong>Introduction: </strong>We currently lack speech testing materials faithful to broader aspects of real-world auditory scenes such as speech directivity and extended high frequency (EHF; > 8 kHz) content that have demonstrable effects on speech perception. Here, we describe the development of a multidirectional, high-fidelity speech corpus using multichannel anechoic recordings that can be used for future studies of speech perception in complex environments by diverse listeners.</p><p><strong>Design: </strong>Fifteen male and 15 female talkers (21.3-60.5 years) recorded Bamford-Kowal-Bench (BKB) Standard Sentence Test lists, digits 0-10, and a 2.5-min unscripted narrative. Recordings were made in an anechoic chamber with 17 free-field condenser microphones spanning 0°-180° azimuth angle around the talker using a 48 kHz sampling rate.</p><p><strong>Results: </strong>Recordings resulted in a large corpus containing four BKB lists, 10 digits, and narratives produced by 30 talkers, and an additional 17 BKB lists (21 total) produced by a subset of six talkers.</p><p><strong>Conclusions: </strong>The goal of this study was to create an anechoic, high-fidelity, multidirectional speech corpus using standard speech materials. More naturalistic narratives, useful for the creation of babble noise and speech maskers, were also recorded. A large group of 30 talkers permits testers to select speech materials based on talker characteristics relevant to a specific task. The resulting speech corpus allows for more diverse and precise speech recognition testing, including testing effects of speech directivity and EHF content. Recordings are publicly available.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"411-418"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02. Epub Date: 2024-12-05. DOI: 10.1044/2024_JSLHR-23-00794
Jennifer E Markfeld, Zoë Kiemel, Pooja Santapuram, Samantha L Bordman, Grace Pulliam, S Madison Clark, Lauren H Hampton, Bahar Keçeli-Kaysili, Jacob I Feldman, Tiffany G Woynaroski
Purpose: The present study explored the extent to which early prelinguistic communication skills predict expressive language in toddlers with autistic siblings (Sibs-autism), who are known to be at high likelihood for autism and language disorder, and a comparison group of toddlers with non-autistic older siblings (Sibs-NA).
Method: Participants were 51 toddlers (29 Sibs-autism, 22 Sibs-NA) aged 12-18 months at the first time point in the study (Time 1). Toddlers were seen again 9 months later (Time 2). Three prelinguistic communication skills (i.e., intentional communication, vocalization complexity, and responding to joint attention) were measured at Time 1 via the Communication and Symbolic Behavior Scales Developmental Profile-Behavior Sample. An expressive language aggregate was calculated for each participant at Time 2. A series of correlation and multiple regression models was run to evaluate associations of interest between prelinguistic communication skills as measured at Time 1 and expressive language as measured at Time 2.
Results: Vocalization complexity and intentional communication displayed significant zero-order correlations with expressive language across sibling groups. Vocal complexity and responding to joint attention did not have significant added value in predicting later expressive language, after covarying for intentional communication across groups. However, sibling group moderated the association between vocalization complexity and later expressive language, such that vocal complexity displayed incremental validity for predicting later expressive language, covarying for intentional communication, only within Sibs-NA.
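As an illustration of this kind of moderation analysis, the following sketch, which is not the authors' code, tests whether sibling group moderates the association between vocal complexity and later expressive language while covarying for intentional communication; the file and variable names are hypothetical.

```python
# Sketch of a moderation test via an interaction term in OLS regression.
# File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("prelinguistic_t1_t2.csv")
# expected columns: expressive_t2, vocal_complexity_t1, intentional_comm_t1,
# sib_group ("autism" or "NA")

model = smf.ols(
    "expressive_t2 ~ vocal_complexity_t1 * sib_group + intentional_comm_t1",
    data=df,
).fit()
print(model.summary())
# A significant vocal_complexity_t1:sib_group term indicates moderation;
# simple slopes can then be probed within each sibling group.
```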
Conclusions: Results indicate that prelinguistic communication skills, in particular intentional communication, show promise for predicting later expressive language in siblings of autistic children. These findings provide additional empirical support for the notion that early preemptive interventions targeting prelinguistic communication skills, especially intentional communication, may have the potential to scaffold language acquisition and support more optimal language outcomes in this population at high likelihood for a future diagnosis of both autism and language disorder.
{"title":"Links Between Early Prelinguistic Communication and Later Expressive Language in Toddlers With Autistic and Non-Autistic Siblings.","authors":"Jennifer E Markfeld, Zoë Kiemel, Pooja Santapuram, Samantha L Bordman, Grace Pulliam, S Madison Clark, Lauren H Hampton, Bahar Keçeli-Kaysili, Jacob I Feldman, Tiffany G Woynaroski","doi":"10.1044/2024_JSLHR-23-00794","DOIUrl":"10.1044/2024_JSLHR-23-00794","url":null,"abstract":"<p><strong>Purpose: </strong>The present study explored the extent to which early prelinguistic communication skills predict expressive language in toddlers with autistic siblings (Sibs-autism), who are known to be at high likelihood for autism and language disorder, and a comparison group of toddlers with non-autistic older siblings (Sibs-NA).</p><p><strong>Method: </strong>Participants were 51 toddlers (29 Sibs-autism, 22 Sibs-NA) aged 12-18 months at the first time point in the study (Time 1). Toddlers were seen again 9 months later (Time 2). Three prelinguistic communication skills (i.e., intentional communication, vocalization complexity, and responding to joint attention) were measured at Time 1 via the Communication and Symbolic Behavior Scales Developmental Profile-Behavior Sample. An expressive language aggregate was calculated for each participant at Time 2. A series of correlation and multiple regression models was run to evaluate associations of interest between prelinguistic communication skills as measured at Time 1 and expressive language as measured at Time 2.</p><p><strong>Results: </strong>Vocalization complexity and intentional communication displayed significant zero-order correlations with expressive language across sibling groups. Vocal complexity and responding to joint attention did not have significant added value in predicting later expressive language, after covarying for intentional communication across groups. However, sibling group moderated the association between vocalization complexity and later expressive language, such that vocal complexity displayed incremental validity for predicting later expressive language, covarying for intentional communication, only within Sibs-NA.</p><p><strong>Conclusions: </strong>Results indicate that prelinguistic communication skills, in particular intentional communication, show promise for predicting later expressive language in siblings of autistic children. These findings provide additional empirical support for the notion that early preemptive interventions targeting prelinguistic communication skills, especially intentional communication, may have the potential to scaffold language acquisition and support more optimal language outcomes in this population at high likelihood for a future diagnosis of both autism and language disorder.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.27745437.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"178-192"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02. Epub Date: 2024-12-13. DOI: 10.1044/2024_JSLHR-24-00133
Erin M Picou, Hilary Davis, Leigh Anne Tang, Lisa Bastarache, Anne Marie Tharpe
Purpose: School-age children with unilateral hearing loss are at an increased risk of exhibiting academic difficulties. Yet, approximately half of children with unilateral hearing loss will not require additional support. There is a dearth of information to assist in determining which of these children will express academic deficits and which will not. The purpose of this study was to identify hearing- and health-related factors that contribute to adverse educational progress in children with permanent unilateral hearing loss. Specific indicators of academic concern identified during school age included the need for specialized academic services, receipt of speech-language therapy, or parent/teacher concerns for academics or speech-language development.
Method: This study provides an in-depth analysis of a previously described patient cohort developed from de-identified electronic health records. Factors of interest included potentially relevant hearing-related risk factors (e.g., degree, type, and laterality of hearing loss), in addition to health-related factors that could be extracted from the electronic health records (e.g., sex, premature birth, history of significant otitis media).
Results: Being born preterm, having a history of pressure equalization tubes, or having conductive or mixed hearing loss more than doubled the risk of demonstrating adverse educational progress. Laterality and degree of loss were generally not significantly related to academic progress.
Conclusions: Approximately half of school-age children with permanent unilateral hearing loss in this cohort experienced some academic challenges. Birth history and middle ear pathology were important predictors of adverse educational progress.
{"title":"Relationships Between Hearing-Related and Health-Related Variables in Academic Progress of Children With Unilateral Hearing Loss.","authors":"Erin M Picou, Hilary Davis, Leigh Anne Tang, Lisa Bastarache, Anne Marie Tharpe","doi":"10.1044/2024_JSLHR-24-00133","DOIUrl":"10.1044/2024_JSLHR-24-00133","url":null,"abstract":"<p><strong>Purpose: </strong>School-age children with unilateral hearing loss are at an increased risk of exhibiting academic difficulties. Yet, approximately half of children with unilateral hearing loss will not require additional support. There is a dearth of information to assist in determining which of these children will express academic deficits and which will not. The purpose of this study was to identify hearing- and health-related factors that contribute to adverse educational progress in children with permanent unilateral hearing loss. Specific indicators of academic concern identified during school age included the need for specialized academic services, receipt of speech-language therapy, or parent/teacher concerns for academics or speech-language development.</p><p><strong>Method: </strong>This study provides an in-depth analysis of a previously described patient cohort developed from de-identified electronic health records. Factors of interest included potentially relevant hearing-related risk factors (e.g., degree, type, and laterality of hearing loss), in addition to health-related factors that could be extracted from the electronic health records (e.g., sex, premature birth, history of significant otitis media).</p><p><strong>Results: </strong>Being born preterm, having a history of pressure equalization tubes or having conductive or mixed hearing loss more than doubled the risk of demonstrating adverse educational progress. Laterality and degree of loss were generally not significantly related to academic progress.</p><p><strong>Conclusions: </strong>Approximately half of school-age children with permanent unilateral hearing loss in this cohort experienced some academic challenges. Birth history and middle ear pathology were important predictors of adverse educational progress.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"364-376"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142820070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02. Epub Date: 2024-12-02. DOI: 10.1044/2024_JSLHR-24-00162
Brandon O'Hanlon, Christopher J Plack, Helen E Nuttall
Purpose: In difficult listening conditions, the visual system assists with speech perception through lipreading. Stimulus onset asynchrony (SOA) is used to investigate the interaction between the two modalities in speech perception. Previous estimates of audiovisual benefit and SOA integration period differ widely. A limitation of previous research is a lack of consideration of visemes (categories of phonemes defined by similar lip movements when produced by a speaker) to ensure that selected phonemes are visually distinct. This study aimed to reassess the benefits of audiovisual lipreading to speech perception when different viseme categories are selected as stimuli and presented in noise. The study also aimed to investigate the effects of SOA on these stimuli.
Method: Sixty participants were tested online and presented with audio-only and audiovisual stimuli containing the speaker's lip movements. The speech was presented either with or without noise and had six different SOAs (0, 200, 216.6, 233.3, 250, and 266.6 ms). Participants discriminated between speech syllables with button presses.
Results: The benefit of visual information was weaker than that in previous studies. There was a significant increase in reaction times as SOA was introduced, but there were no significant effects of SOA on accuracy. Furthermore, exploratory analyses suggest that the effect was not equal across viseme categories: "Ba" was more difficult to recognize than "ka" in noise.
Conclusion: In summary, the findings suggest that the contributions of audiovisual integration to speech processing are weaker when visemes are considered, and that the present data are not sufficient to identify a full integration period.
{"title":"Reassessing the Benefits of Audiovisual Integration to Speech Perception and Intelligibility.","authors":"Brandon O'Hanlon, Christopher J Plack, Helen E Nuttall","doi":"10.1044/2024_JSLHR-24-00162","DOIUrl":"10.1044/2024_JSLHR-24-00162","url":null,"abstract":"<p><strong>Purpose: </strong>In difficult listening conditions, the visual system assists with speech perception through lipreading. Stimulus onset asynchrony (SOA) is used to investigate the interaction between the two modalities in speech perception. Previous estimates of audiovisual benefit and SOA integration period differ widely. A limitation of previous research is a lack of consideration of visemes-categories of phonemes defined by similar lip movements when produced by a speaker-to ensure that selected phonemes are visually distinct. This study aimed to reassess the benefits of audiovisual lipreading to speech perception when different viseme categories are selected as stimuli and presented in noise. The study also aimed to investigate the effects of SOA on these stimuli.</p><p><strong>Method: </strong>Sixty participants were tested online and presented with audio-only and audiovisual stimuli containing the speaker's lip movements. The speech was presented either with or without noise and had six different SOAs (0, 200, 216.6, 233.3, 250, and 266.6 ms). Participants discriminated between speech syllables with button presses.</p><p><strong>Results: </strong>The benefit of visual information was weaker than that in previous studies. There was a significant increase in reaction times as SOA was introduced, but there were no significant effects of SOA on accuracy. Furthermore, exploratory analyses suggest that the effect was not equal across viseme categories: \"Ba\" was more difficult to recognize than \"ka\" in noise.</p><p><strong>Conclusion: </strong>In summary, the findings suggest that the contributions of audiovisual integration to speech processing are weaker when considering visemes but are not sufficient to identify a full integration period.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.27641064.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"26-39"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02. Epub Date: 2024-12-12. DOI: 10.1044/2024_JSLHR-24-00195
Andrea L B Ford, Marianne Elmquist, LeAnne D Johnson, Jon Tapp
Purpose: Estimating the sequential associations between educators' and children's talk during language learning interactions requires careful consideration of factors that may impact measurement stability and resultant inferences. This research note describes a preliminary study that used generalizability theory to understand the contribution of two measurement conditions (occasions and raters) to estimates of sequential associations between educator talk and autistic preschooler talk in inclusive preschool classrooms.
Method: We used an existing data set of four 15-min video-recorded occasions of educator-child interactions for 11 autistic preschoolers during free-play in their inclusive classroom. Two trained raters coded all videos for preschooler talk and type of educator talk (i.e., opportunities for expressive language [OELs], other talk). We conducted two generalizability studies on sequential association estimates for two interaction directions (i.e., preschooler talk following educator OEL and educator talk following preschooler talk). We conducted a series of decision studies to explore configurations of measurement conditions to optimize future investigations.
Results: We had unstable estimates for both interaction directions under our current methodological approach, with raters accounting for minimal error and occasions accounting for considerable error. Stable estimates of the sequential association for autistic preschooler talk following educator OEL would require at least six observation occasions; more than 15 occasions were required for stable estimates for educator talk following autistic preschooler talk.
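To show how a decision study turns variance-component estimates into a required number of occasions, here is a minimal sketch for a fully crossed child × occasion × rater design; the variance components are invented placeholders, not the study's estimates.

```python
# Decision-study sketch for a crossed child x occasion x rater design.
# Variance components below are invented placeholders, not study estimates.
def g_coefficient(var_child, var_child_occ, var_child_rater, var_residual,
                  n_occasions, n_raters):
    # Relative error variance shrinks as more occasions/raters are averaged.
    rel_error = (var_child_occ / n_occasions
                 + var_child_rater / n_raters
                 + var_residual / (n_occasions * n_raters))
    return var_child / (var_child + rel_error)

components = dict(var_child=0.40, var_child_occ=0.30,
                  var_child_rater=0.02, var_residual=0.28)
for n_occ in (1, 2, 4, 6, 10, 15):
    g = g_coefficient(**components, n_occasions=n_occ, n_raters=2)
    print(f"{n_occ:>2} occasions, 2 raters: G = {g:.2f}")
```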
Conclusion: We will share recommendations and implications for future investigations to estimate educator and child talk sequential associations within preschool language interactions.
{"title":"Preliminary Examination of the Stability of Sequential Associations Between the Talk of Educators and Autistic Preschoolers Using Generalizability Theory.","authors":"Andrea L B Ford, Marianne Elmquist, LeAnne D Johnson, Jon Tapp","doi":"10.1044/2024_JSLHR-24-00195","DOIUrl":"https://doi.org/10.1044/2024_JSLHR-24-00195","url":null,"abstract":"<p><strong>Purpose: </strong>Estimating the sequential associations between educators' and children's talk during language learning interactions requires careful consideration of factors that may impact measurement stability and resultant inferences. This research note will describe a preliminary study that used generalizability theory to understand the contribution of two measurement conditions-<i>occasions</i> and <i>raters</i>-on estimates of sequential associations between educator talk and autistic preschooler talk in inclusive preschool classrooms.</p><p><strong>Method: </strong>We used an existing data set of four 15-min video-recorded occasions of educator-child interactions for 11 autistic preschoolers during free-play in their inclusive classroom. Two trained raters coded all videos for preschooler talk and type of educator talk (i.e., opportunities for expressive language [OELs], other talk). We conducted two generalizability studies on sequential association estimates for two interaction directions (i.e., preschooler talk following educator OEL and educator talk following preschooler talk). We conducted a series of decision studies to explore configurations of measurement conditions to optimize future investigations.</p><p><strong>Results: </strong>We had unstable estimates for both interaction directions in our current methodological approach, with raters accounting for minimal error and occasions accounting for considerable error. Future investigations would require at least six observation occasions for stable estimates of the sequential association between autistic preschooler talk following educator OEL that was stable after six occasions. More than 15 occasions were required for stable estimates of the association between educator talk following autistic preschooler talk.</p><p><strong>Conclusion: </strong>We will share recommendations and implications for future investigations to estimate educator and child talk sequential associations within preschool language interactions.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":"68 1","pages":"248-258"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-02. Epub Date: 2024-12-18. DOI: 10.1044/2024_JSLHR-24-00378
Danika L Pfeiffer, Austin Thompson, Brittany Ciullo, Micah E Hirsch, Mariam El Amin, Andrea Ford, Jessica Riccardi, Elaine Kearney
Purpose: The purpose of this qualitative study was to examine the perceptions of communication sciences and disorders (CSD) assistant professors in the United States related to barriers and facilitators to engaging in open science practices and identify opportunities for improving open science training and support in the field.
Method: Thirty-five assistant professors (16 from very high research activity [R1] institutions, 19 from institutions with other Carnegie classifications) participated in one 1-hr virtual focus group conducted and recorded via Zoom. The researchers used a conventional content analysis approach to analyze the focus group data and develop categories from the discussions.
Results: Five categories were developed from the focus groups: (a) a desire to learn about open science through opportunities for independent learning and learning with peers; (b) perceived benefits of engaging in open science on assistant professors' careers, the broader scientific community, and the quality of research in the field of CSD; (c) personal factors that act as barriers and/or facilitators to engaging in open science practices; (d) systemic factors that act as barriers and/or facilitators to engaging in open science practices; and (e) differences in perceptions of R1 and non-R1 assistant professors.
Conclusions: Assistant professors in CSD perceive benefits of open science for their careers, the scientific community, and the field. However, they face many barriers (e.g., time, lack of knowledge and training), which impede their engagement in open science practices. Preliminary recommendations for CSD assistant professors, academic institutions, publishers, and funding agencies are provided to reduce barriers to engagement in open science practices.
{"title":"\"1-800-Help-Me-With-Open-Science-Stuff\": A Qualitative Examination of Open Science Practices in Communication Sciences and Disorders.","authors":"Danika L Pfeiffer, Austin Thompson, Brittany Ciullo, Micah E Hirsch, Mariam El Amin, Andrea Ford, Jessica Riccardi, Elaine Kearney","doi":"10.1044/2024_JSLHR-24-00378","DOIUrl":"10.1044/2024_JSLHR-24-00378","url":null,"abstract":"<p><strong>Purpose: </strong>The purpose of this qualitative study was to examine the perceptions of communication sciences and disorders (CSD) assistant professors in the United States related to barriers and facilitators to engaging in open science practices and identify opportunities for improving open science training and support in the field.</p><p><strong>Method: </strong>Thirty-five assistant professors (16 from very high research activity [R1] institutions, 19 from institutions with other Carnegie classifications) participated in one 1-hr virtual focus group conducted via Zoom recording technology. The researchers used a conventional content analysis approach to analyze the focus group data and develop categories from the discussions.</p><p><strong>Results: </strong>Five categories were developed from the focus groups: (a) a desire to learn about open science through opportunities for independent learning and learning with peers; (b) perceived benefits of engaging in open science on assistant professors' careers, the broader scientific community, and the quality of research in the field of CSD; (c) personal factors that act as barriers and/or facilitators to engaging in open science practices; (d) systemic factors that act as barriers and/or facilitators to engaging in open science practices; and (e) differences in perceptions of R1 and non-R1 assistant professors.</p><p><strong>Conclusions: </strong>Assistant professors in CSD perceive benefits of open science for their careers, the scientific community, and the field. However, they face many barriers (e.g., time, lack of knowledge and training), which impede their engagement in open science practices. Preliminary recommendations for CSD assistant professors, academic institutions, publishers, and funding agencies are provided to reduce barriers to engagement in open science practices.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.27996839.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"105-128"},"PeriodicalIF":2.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142855985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}