
Journal of Speech Language and Hearing Research: Latest Publications

Hearing Impairment: Reduced Pupil Dilation Response and Frontal Activation During Degraded Speech Perception.
IF 2.2 | Zone 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-11-07 | Epub Date: 2024-10-11 | DOI: 10.1044/2024_JSLHR-24-00017
Adriana A Zekveld, Sophia E Kramer, Dirk J Heslenfeld, Niek J Versfeld, Chris Vriend

Purpose: A relevant aspect of listening is the effort required during speech processing, which can be assessed by pupillometry. Here, we assessed the pupil dilation response of normal-hearing (NH) and hard of hearing (HH) individuals during listening to clear sentences and masked or degraded sentences. We combined this assessment with functional magnetic resonance imaging (fMRI) to investigate the neural correlates of the pupil dilation response.

Method: Seventeen NH participants (Mage = 46 years) were compared to 17 HH participants (Mage = 45 years) who were individually matched in age and educational level. Participants repeated sentences that were presented clearly, that were distorted, or that were masked. The sentence intelligibility level of masked and distorted sentences was 50% correct. Silent baseline trials were presented as well. Performance measures, pupil dilation responses, and fMRI data were acquired.
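The pupil dilation response used as the effort measure above can be illustrated with a minimal baseline-correction sketch (the sampling rate, baseline window, and summary statistics here are invented for illustration; the abstract does not specify the study's actual preprocessing):

```python
import numpy as np

def pupil_dilation_response(trace, sample_rate, baseline_s=1.0):
    """Baseline-corrected pupil dilation for a single trial.

    `trace` is pupil diameter over time; the first `baseline_s` seconds
    serve as the pre-stimulus baseline. Returns (mean, peak) dilation
    relative to that baseline.
    """
    trace = np.asarray(trace, dtype=float)
    n_base = int(baseline_s * sample_rate)
    baseline = trace[:n_base].mean()
    dilation = trace[n_base:] - baseline
    return dilation.mean(), dilation.max()

# Toy trial at 10 Hz: 1 s flat baseline at 4.0 mm, then dilation to 4.5 mm.
trial = [4.0] * 10 + [4.1, 4.3, 4.5, 4.4, 4.2]
mean_d, peak_d = pupil_dilation_response(trial, sample_rate=10)
```

Larger baseline-corrected dilation is conventionally read as greater listening effort, which is how the comparison across clear, masked, and degraded conditions is framed here.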

Results: HH individuals had overall poorer speech reception than the NH participants, but not for noise-vocoded speech. In addition, an interaction effect was observed with smaller pupil dilation responses in HH than in NH listeners for the degraded speech conditions. Hearing impairment was associated with higher activation across conditions in the left superior temporal gyrus, as compared to the silent baseline. However, the region of interest analysis indicated lower activation during degraded speech relative to clear speech in bilateral frontal regions and the insular cortex, for HH compared to NH listeners. Hearing impairment was also associated with a weaker relation between the pupil response and activation in the right inferior frontal gyrus. Overall, degraded speech evoked higher frontal activation than clear speech.

Conclusion: Brain areas associated with attentional and cognitive-control processes may be increasingly recruited when speech is degraded and are related to the pupil dilation response, but this relationship is weaker in HH listeners.

Supplemental material: https://doi.org/10.23641/asha.27162135.

Citations: 0
An Articulatory Analysis of American English Rhotics in Children With and Without a History of Residual Speech Sound Disorder.
IF 4.6 | Zone 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-11-07 | Epub Date: 2024-10-14 | DOI: 10.1044/2024_JSLHR-24-00037
Amanda Eads, Heather Kabakoff, Hannah King, Jonathan L Preston, Tara McAllister

Purpose: This study investigated articulatory patterns for American English /ɹ/ in children with and without a history of residual speech sound disorder (RSSD). It was hypothesized that children without RSSD would favor bunched tongue shapes, similar to American adults reported in previous literature. Based on clinical cueing practices, it was hypothesized that children with RSSD might produce retroflex tongue shape patterns at a higher relative rate. Finally, it was hypothesized that, among children who use a mixture of bunched and retroflex shapes, phonetic context would impact tongue shape as reported in the adult literature.

Method: These hypotheses were tested using ultrasound data from a stimulability task eliciting /ɹ/ in syllabic, postvocalic, and onset contexts. Participants were two groups of children/adolescents aged 9-15 years: 36 with RSSD who completed a study of ultrasound biofeedback treatment and 33 with no history of RSSD. Tongue shapes were qualitatively coded as bunched or retroflex using a flowchart from previous research.

Results: Children with no history of RSSD were found to use bunched-only tongue shape patterns at a rate higher than adults, but those who used a mixture of shapes for /ɹ/ followed the expected phonetic contextual patterning. Children with RSSD were found to use retroflex-only patterns at a substantially higher rate than adults, and those using a mixture of shapes did not exhibit the expected patterning by phonetic context.

Conclusions: These findings suggest that clients receiving ultrasound biofeedback treatment for /ɹ/ may be most responsive to clinician cueing of retroflex shapes, at least early on. However, retroflex-only cueing may be a limiting and insufficient strategy, particularly in light of our finding of a lack of typical variation across phonetic contexts in children with remediated /ɹ/. Future research should more specifically track cueing strategies to better understand the relationship between clinician cues, tongue shapes, and generalization across a range of contexts.

Supplemental material: https://doi.org/10.23641/asha.26801050.

Citations: 0
FluencyBank Timestamped: An Updated Data Set for Disfluency Detection and Automatic Intended Speech Recognition.
IF 2.2 | Zone 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-11-07 | Epub Date: 2024-10-08 | DOI: 10.1044/2024_JSLHR-24-00070
Amrit Romana, Minxue Niu, Matthew Perez, Emily Mower Provost

Purpose: This work introduces updated transcripts, disfluency annotations, and word timings for FluencyBank, which we refer to as FluencyBank Timestamped. This data set will enable the thorough analysis of how speech processing models (such as speech recognition and disfluency detection models) perform when evaluated with typical speech versus speech from people who stutter (PWS).

Method: We update the FluencyBank data set, which includes audio recordings from adults who stutter, to explore the robustness of speech processing models. Our update (semi-automated with manual review) includes new transcripts with timestamps and disfluency labels corresponding to each token in the transcript. Our disfluency labels capture typical disfluencies (filled pauses, repetitions, revisions, and partial words), and we explore how speech model performance compares for Switchboard (typical speech) and FluencyBank Timestamped. We present benchmarks for three speech tasks: intended speech recognition, text-based disfluency detection, and audio-based disfluency detection. For the first task, we evaluate how well Whisper performs for intended speech recognition (i.e., transcribing speech without disfluencies). For the next tasks, we evaluate how well a Bidirectional Encoder Representations from Transformers (BERT) text-based model and a Whisper audio-based model perform for disfluency detection. We select these models, BERT and Whisper, as they have shown high accuracies on a broad range of tasks in their language and audio domains, respectively.

Results: For the transcription task, we calculate an intended speech word error rate (isWER) between the model's output and the speaker's intended speech (i.e., speech without disfluencies). We find isWER is comparable between Switchboard and FluencyBank Timestamped, but that Whisper transcribes filled pauses and partial words at higher rates in the latter data set. Within FluencyBank Timestamped, isWER increases with stuttering severity. For the disfluency detection tasks, we find the models detect filled pauses, revisions, and partial words relatively well in FluencyBank Timestamped, but performance drops substantially for repetitions because the models are unable to generalize to the different types of repetitions (e.g., multiple repetitions and sound repetitions) from PWS. We hope that FluencyBank Timestamped will allow researchers to explore closing performance gaps between typical speech and speech from PWS.
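The isWER idea can be sketched as: strip typical disfluencies from the reference transcript, then score the model output against that intended-speech reference with an ordinary word error rate. Everything below (the filler inventory, the trailing-hyphen convention for partial words, the repetition rule) is a hypothetical simplification of the data set's actual annotations:

```python
FILLERS = {"uh", "um"}  # hypothetical filler inventory

def intended_words(tokens):
    """Approximate the intended utterance by dropping typical
    disfluencies: filled pauses, partial words (marked here with a
    trailing '-'), and immediate word repetitions. Revisions require
    richer annotation and are ignored in this sketch."""
    out = []
    for tok in tokens:
        if tok in FILLERS or tok.endswith("-"):
            continue
        if out and out[-1] == tok:  # collapse immediate repetitions
            continue
        out.append(tok)
    return out

def wer(ref, hyp):
    """Word error rate via word-level Levenshtein distance."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

ref = intended_words("i i want uh to go-".split())  # -> ['i', 'want', 'to']
hyp = "i want to".split()  # a model output matching the intended speech
```

Under this scheme a transcript that verbatim reproduces fillers or partial words is penalized against the intended-speech reference, which matches the pattern reported above for Whisper on FluencyBank Timestamped.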

Conclusions: Our analysis shows that there are gaps in speech recognition and disfluency detection performance between typical speech and speech from PWS. We hope that FluencyBank Timestamped will contribute to more advancements in training robust speech processing models.

Citations: 0
Imitation of Multisyllabic Items by Children With Developmental Language Disorder: Evidence for Word-Level Atypical Speech Envelope and Pitch Contours.
IF 2.2 | Zone 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-11-07 | Epub Date: 2024-10-11 | DOI: 10.1044/2024_JSLHR-24-00031
Lyla Parvez, Mahmoud Keshavarzi, Susan Richards, Giovanni M Di Liberto, Usha Goswami

Purpose: Developmental language disorder (DLD) is a multifaceted disorder. Recently, interest has grown in prosodic aspects of DLD, but most investigations of possible prosodic causes focus on speech perception tasks. Here, we focus on speech production from a speech amplitude envelope (AE) perspective. Perceptual studies have indicated that difficulties in AE processing, linked to sensory/neural processing of prosody, play a role in DLD. We explore possible matching AE difficulties in production.

Method: Fifty-seven children with and without DLD completed a computerized imitation task, copying aloud 30 familiar targets such as "alligator." Children with DLD (n = 20) were compared with typically developing children (age-matched controls [AMC], n = 21) and younger language controls (YLC, n = 16). Similarity of the child's productions to the target in terms of the continuous AE and pitch contour was computed using two similarity metrics: correlation and mutual information. Both the speech AE and the pitch contour carry important information about stress patterning and intonation over time.
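The two similarity metrics can be sketched as follows: Pearson correlation plus a simple histogram-based mutual information estimate between a target contour and an imitated one. This is illustrative only; the bin count, contours, and noise level are invented, and the study's actual MI estimator is not specified in the abstract:

```python
import numpy as np

def contour_similarity(x, y, n_bins=8):
    """Pearson correlation and a plug-in mutual information estimate
    (in bits) between two equal-length contours, e.g. a target and an
    imitated amplitude envelope."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    # Discretize jointly, then apply the standard MI formula.
    joint, _, _ = np.histogram2d(x, y, bins=n_bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    mask = p > 0
    mi = np.sum(p[mask] * np.log2(p[mask] / (px[:, None] * py[None, :])[mask]))
    return r, mi

t = np.linspace(0, 1, 200)
target = np.abs(np.sin(2 * np.pi * 3 * t))  # toy target envelope
imitation = 0.9 * target + 0.05 * np.random.default_rng(0).normal(size=t.size)
r, mi = contour_similarity(target, imitation)  # close imitation: high r, mi > 0
```

A faithful imitation yields high correlation and mutual information with the target contour; reduced values, as reported for the DLD group, indicate a less accurate reproduction of the envelope and pitch trajectories.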

Results: Children with DLD showed significantly reduced imitation for both the AE and pitch contour metrics compared to AMC children. The opportunity to repeat the targets had no impact on performance for any group. Word length effects were similar across groups.

Conclusions: The spoken production of multisyllabic words by children with DLD is atypical regarding both the AE and the pitch contour. This is consistent with a theoretical explanation of DLD based on impaired sensory/neural processing of low-frequency (slow) amplitude and frequency modulations, as predicted by the temporal sampling theory.

Supplemental material: https://doi.org/10.23641/asha.27165690.

Citations: 0
Sharing Stories Versus Explaining Facts: Comparing African American Children's Microstructure Performance Across Fictional Narrative, Informational, and Procedural Discourse.
IF 2.2 | Zone 2 (Medicine) | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-11-07 | Epub Date: 2024-10-11 | DOI: 10.1044/2024_JSLHR-23-00579
Nicole Gardner-Neblett, Dulce Lopez Alvarez

Purpose: Both fictional oral narrative and expository oral discourse skills are critical language competencies that support children's academic success. Few studies, however, have examined African American children's microstructure performance across these genres. To address this gap in the literature, the study compared African American children's microstructure productivity and complexity across three discourse contexts: fictional narratives, informational discourse, and procedural discourse. The study also examined whether there were age-related differences in microstructure performance by discourse type.

Method: Participants were 130 typically developing African American children, aged 59-95 months old, enrolled in kindergarten through second grades in a Midwestern U.S. public school district. Wordless children's books were used to elicit fictional narratives, informational, and procedural discourse. Indicators of microstructure performance included measures of productivity (i.e., number of total words and number of different words) and complexity (i.e., mean length of communication unit and complex syntax rate). The effects of genre and age on microstructure performance were assessed using linear mixed-effects regression models.
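The productivity measures named in the Method can be sketched over segmented transcripts (the tokenization and sample sentences are invented; clinical transcription conventions handle mazes and morphemes more carefully than this):

```python
def microstructure_measures(c_units):
    """Productivity and length measures over a list of communication
    units (each a list of word tokens): NTW = number of total words,
    NDW = number of different words, MLCU = mean length of C-unit in
    words."""
    words = [w.lower() for unit in c_units for w in unit]
    ntw = len(words)
    ndw = len(set(words))
    mlcu = ntw / len(c_units) if c_units else 0.0
    return {"NTW": ntw, "NDW": ndw, "MLCU": mlcu}

# Two toy C-units from a fictional narrative.
sample = [
    "the frog jumped out".split(),
    "and the boy looked everywhere".split(),
]
m = microstructure_measures(sample)  # NTW=9, NDW=8, MLCU=4.5
```

Higher NTW/NDW indicate greater productivity and lexical diversity, while MLCU (together with complex syntax rate) indexes grammatical complexity, which is how the genre comparisons below are quantified.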

Results: Children produced longer discourse and used a greater diversity of words for their fictional stories compared to their informational or procedural discourse. Grammatical complexity was greater for fictional narratives and procedural discourse than informational discourse. Results showed greater productivity and complexity among older children compared to younger children, particularly for fictional and informational discourse.

Conclusions: African American children exhibit variation in their microstructure performance by discourse context and age. Understanding this variation is key to providing African American children with support to maximize their oral language competencies.

Neural Decoding of Spontaneous Overt and Intended Speech.
IF 2.2 Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date : 2024-11-07 Epub Date : 2024-08-06 DOI: 10.1044/2024_JSLHR-24-00046
Debadatta Dash, Paul Ferrari, Jun Wang

Purpose: The aim of this study was to decode intended and overt speech from neuromagnetic signals while the participants performed spontaneous overt speech tasks without cues or prompts (stimuli).

Method: Magnetoencephalography (MEG), a noninvasive neuroimaging technique, was used to collect neural signals from seven healthy adult English speakers performing spontaneous, overt speech tasks. The participants randomly spoke the words yes or no at a self-paced rate without cues. Two machine learning models, namely, linear discriminant analysis (LDA) and one-dimensional convolutional neural network (1D CNN), were employed to classify the two words from the recorded MEG signals.

Results: LDA and 1D CNN achieved average decoding accuracies of 79.02% and 90.40%, respectively, in decoding overt speech, significantly surpassing the chance level (50%). The accuracy for decoding intended speech was 67.19% using 1D CNN.

Conclusions: This study showcases the possibility of decoding spontaneous overt and intended speech directly from neural signals in the absence of perceptual interference. We believe these findings represent a steady step toward a future spontaneous speech-based brain-computer interface.
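As a rough illustration of the classification setup described above — two word classes scored against a 50% chance level — here is a minimal Fisher linear discriminant on synthetic feature vectors (NumPy only; the feature dimensions, trial counts, and train/test split are invented for the sketch and are not the authors' MEG pipeline):

```python
# Illustrative sketch with synthetic "MEG-like" trial features, not real data:
# equal-prior LDA separating two word classes, scored on held-out trials.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 20
X_yes = rng.normal(0.5, 1.0, size=(n, d))   # synthetic trials for "yes"
X_no = rng.normal(-0.5, 1.0, size=(n, d))   # synthetic trials for "no"

def fit_lda(X0, X1):
    # Fisher's LDA with equal priors: project on w = S_pooled^-1 (mu1 - mu0)
    # and threshold halfway between the projected class means.
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    pooled = (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)) / 2
    w = np.linalg.solve(pooled, mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2
    return lambda X: (X @ w > threshold).astype(int)

classify = fit_lda(X_no[:50], X_yes[:50])    # train on the first half
preds = np.concatenate([classify(X_no[50:]), classify(X_yes[50:])])
truth = np.array([0] * 50 + [1] * 50)
accuracy = (preds == truth).mean()
print(f"held-out accuracy: {accuracy:.2f} (chance = 0.50)")
```

With well-separated synthetic classes the held-out accuracy far exceeds chance; the article's point is that real neuromagnetic signals support the same comparison, at 79-90% for overt speech.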

Conflict Adaptation in Aphasia: Upregulating Cognitive Control for Improved Sentence Comprehension.
IF 4.6 Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date : 2024-11-07 Epub Date : 2024-10-08 DOI: 10.1044/2024_JSLHR-23-00768
Anna Krason, Erica L Middleton, Matthew E P Ambrogi, Malathi Thothathiri

Purpose: This study investigated conflict adaptation in aphasia, specifically whether upregulating cognitive control improves sentence comprehension.

Method: Four individuals with mild aphasia completed four eye tracking sessions with interleaved auditory Stroop and sentence-to-picture matching trials (critical and filler sentences). Auditory Stroop congruency (congruent/incongruent across a male/female voice saying "boy"/"girl") was crossed with sentence congruency (syntactically correct sentences that are semantically plausible/implausible), resulting in four experimental conditions (congruent auditory Stroop followed by incongruent sentence [CI], incongruent auditory Stroop followed by incongruent sentence [II], congruent auditory Stroop followed by congruent sentence [CC], and incongruent auditory Stroop followed by congruent sentence [IC]). Critical sentences were always preceded by auditory Stroop trials. At the end of each session, a five-item questionnaire was administered to assess overall well-being and fatigue. We conducted individual-level mixed-effects regressions on reaction times and growth curve analyses on the proportion of eye fixations to target pictures during incongruent sentences.

Results: One participant showed conflict adaptation indicated by faster reaction times on active sentences and more rapid growth in fixations to target pictures on passive sentences in the II condition compared to the CI condition. Incongruent auditory Stroop also modulated active-sentence processing in an additional participant, as indicated by eye movements.

Conclusions: This is the first study to observe conflict adaptation in sentence comprehension in people with aphasia. The extent of adaptation varied across individuals. Eye tracking revealed subtler effects than overt behavioral measures. The results extend the study of conflict adaptation beyond neurotypical adults and suggest that upregulating cognitive control may be a potential treatment avenue for some individuals with aphasia.

Supplemental material: https://doi.org/10.23641/asha.27056149.

Divided Attention Has Limited Effects on Speech Sensorimotor Control.
IF 4.6 Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date : 2024-11-07 Epub Date : 2024-10-17 DOI: 10.1044/2024_JSLHR-24-00098
Jenna Krakauer, Chris Naber, Caroline A Niziolek, Benjamin Parrell

Purpose: When vowel formants are externally perturbed, speakers change their production to oppose that perturbation both during the ongoing production (compensation) and in future productions (adaptation). To date, attempts to explain the large variability across individuals in these responses have focused on trait-based characteristics such as auditory acuity, but evidence from other motor domains suggests that attention may modulate the motor response to sensory perturbations. Here, we test the extent to which divided attention impacts sensorimotor control for supralaryngeal articulation.

Method: Neurobiologically healthy speakers were exposed to random (Experiment 1) or consistent (Experiment 2) real-time auditory perturbation of vowel formants to measure online compensation and trial-to-trial adaptation, respectively. In both experiments, participants completed two conditions: one with a simultaneous visual distractor task to divide attention and one without this secondary task.

Results: Divided visual attention slightly reduced online compensation, but only starting > 300 ms after vowel onset, well beyond the typical duration of vowels in speech. Divided attention had no effect on adaptation.

Conclusions: The results from both experiments suggest that the use of sensory feedback in typical speech motor control is a largely automatic process unaffected by divided visual attention, suggesting that the source of cross-speaker variability in response to formant perturbations likely lies within the speech production system rather than in higher-level cognitive processes. Methodologically, these results suggest that compensation for formant perturbations should be measured prior to 300 ms after vowel onset to avoid any potential impact of attention or other higher-order cognitive factors.
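The methodological recommendation above — score compensation within the first 300 ms after vowel onset — can be sketched on a toy formant track (every constant below is assumed for illustration; none comes from the study's data):

```python
# Toy illustration of a windowed compensation measure: mean F1 deviation from
# baseline inside an early analysis window, here 0-300 ms after vowel onset.
import numpy as np

fs = 1000                                  # formant-track sample rate, Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)              # a 500-ms vowel
baseline_f1 = 500.0                        # unperturbed F1 in Hz (assumed)
perturbation = +100.0                      # upward F1 shift applied to feedback
# Toy opposing response that builds up and saturates near 20% of the shift.
produced_f1 = baseline_f1 - 0.2 * perturbation * (1 - np.exp(-t / 0.15))

early = t < 0.300                          # restrict analysis to 0-300 ms
compensation = (produced_f1[early] - baseline_f1).mean()
print(f"mean compensation, 0-300 ms: {compensation:.1f} Hz (opposes +100 Hz shift)")
```

The sign convention makes the point of the window explicit: a negative mean deviation opposes an upward perturbation, and restricting the window keeps later, attention-sensitive deviations out of the measure.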

Language Profiles of School-Age Children With 16p11.2 Copy Number Variants in a Clinically Ascertained Cohort.
IF 4.6 Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date : 2024-11-07 Epub Date : 2024-10-17 DOI: 10.1044/2024_JSLHR-24-00257
Jente Verbesselt, Jeroen Breckpot, Inge Zink, Ann Swillen

Purpose: Individuals with proximal 16p11.2 copy number variants (CNVs), either deletions (16p11.2DS) or duplications (16p11.2Dup), are predisposed to neurodevelopmental difficulties and disorders, such as language disorders, intellectual disability, and autism spectrum disorder. The purpose of the current study was to characterize language profiles of school-age children with proximal 16p11.2 CNVs, in relation to the normative sample and unaffected siblings of children with 16p11.2DS.

Method: Standardized language tests were conducted in 33 school-age children with BP4-BP5 16p11.2 CNVs and eight unaffected siblings of children with 16p11.2DS to evaluate language production and comprehension skills across various language domains. A standardized intelligence test was also administered, and parents completed a standardized questionnaire to assess autistic traits. Language profiles were compared across 16p11.2 CNVs and intrafamilial pairs. The influence of nonverbal intelligence and autistic traits on language outcomes was investigated.

Results: No significant differences were found between children with 16p11.2DS and those with 16p11.2Dup, although both groups exhibited significantly poorer language skills compared to the normative sample and unaffected siblings of children with 16p11.2DS. Severe language deficits were identified in 70% of individuals with 16p11.2 CNVs across all language subdomains, with significantly better receptive vocabulary skills than overall receptive language abilities. In children with 16p11.2DS, expressive language deficits were more pronounced than receptive deficits. In contrast, only in children with 16p11.2Dup did nonverbal intelligence influence their language outcomes.

Conclusions: The current study contributes to the deeper understanding of language profiles in 16p11.2 CNVs in a clinically ascertained cohort, indicating generalized deficits across multiple language domains, rather than a syndrome-specific pattern targeting specific subdomains. The findings underscore the importance of early diagnosis, targeted therapy, and monitoring of language skills in children with 16p11.2 CNVs.

Supplemental material: https://doi.org/10.23641/asha.27228702.

Retracted: The Relationship Between Oral and Written Language in Narrative Production by Arabic-Speaking Children: Fundamental Skills and Influences.
IF 2.2 Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date : 2024-11-07 Epub Date : 2024-07-09 DOI: 10.1044/2024_JSLHR-23-00717
Khaloob Kawar

Notice of retraction: https://doi.org/10.1044/2024_Nov2024ASHA.

Purpose: This study aims to investigate the relationship between oral and written language skills in narrative production among Arabic-speaking children, focusing on cognitive and linguistic abilities. It examines the differences in narrative parameters between oral and written narratives and explores the associations between these parameters and cognitive and linguistic skills.

Method: The research involved 237 sixth-grade Arabic-speaking students from low-socioeconomic status schools in Israel. Each participant was instructed to orally tell a narrative and to write another narrative based on two sets of six sequential pictures. Various narrative features were analyzed, including word count for length, type-token ratio (TTR) for lexical diversity, mean length of utterance (MLU) for morphosyntax, and number of episodes for macrostructure. Cognitive linguistic measures, including Raven's Progressive Matrices, reading comprehension (RC), and morphological awareness (MA) were also assessed.

Results: The study found significant differences between oral and written narratives regarding lexical diversity and macrostructure. Participants exhibited significantly higher TTR in written narratives compared to oral narratives, whereas the number of episodes was significantly higher in oral narratives than in written ones. However, no significant differences were observed in narrative length or MLU. Moreover, the study identified significant predictors for various aspects of written narratives, particularly MA and RC, which significantly predicted TTR, MLU, and macrostructure. Additionally, the inclusion of word count in oral narratives significantly enhanced the explained variance for narrative length and macrostructure in written language.

Conclusions: The results highlight the importance of the oral-written interface in both micro- and macrostructure representations in both oral and written modalities. They suggest that cognitive and linguistic skills, such as MA and RC, play a crucial role in narrative production. The findings have implications for educational practices and literacy outcomes in the Arab world, enhancing the understanding of the challenges and strategies involved in written language production among Arabic-speaking children.
