
Journal of Speech Language and Hearing Research: Latest Publications

Maternal Question Use Relates to Syntactic Skills in 5- to 7-Year-Old Children.
IF 2.2 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-12-09 Epub Date: 2024-10-30 DOI: 10.1044/2024_JSLHR-23-00426
Grace Buckalew, Alexus G Ramirez, Julie M Schneider

Purpose: This study examined how mothers' question-asking behavior relates to their child's syntactic skills. One important aspect of maternal question-asking behavior is the use of complex questions when speaking with children. These questions can differ based on both their purpose and structure. The purpose may be to seek out information, to teach, or to get a simple yes/no response. Questions may even be rhetorical, with no answer intended at all. Structurally, questions can include a wh-word (who, what, when, where, why, and how) or not; however, these wh-questions are important because they elicit utterances from the child and support vocabulary development. Despite wh-questions eliciting a response from children, it remains unknown how these questions relate to children's syntactic skills.

Method: Thirty-four mother-child dyads participated in a 15-min seminaturalistic play session. Children were between the ages of 5 and 7 years (M = 6.26 years, SD = 1.04 years; 20 girls/14 boys). The Diagnostic Evaluation of Language Variation (DELV) assessment was used to measure syntactic skills in children. Using the Systematic Analysis of Language Transcripts, questions were categorized based on structure (wh-questions vs. non-wh-questions) and purpose (information-seeking, pedagogical, or yes/no and rhetorical questions). A repeated-measures analysis of covariance and a linear regression model were implemented to address the frequency of different questions asked by mothers, as well as what types of questions are most related to children's concurrent syntactic skills.
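To make the shape of this analysis concrete, the sketch below regresses a child syntactic score on the count of information-seeking wh-questions while controlling for total maternal utterances. It uses synthetic data and hypothetical column names (delv_syntax, wh_info_seeking, total_utterances); it is not the authors' analysis code.

```python
# Minimal sketch (not the authors' code): relate counts of information-seeking
# wh-questions to a child syntactic score while controlling for total maternal
# utterances. Column names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 34  # number of dyads, as in the study
df = pd.DataFrame({
    "total_utterances": rng.integers(150, 400, n),
    "wh_info_seeking": rng.integers(0, 25, n),
})
# Synthetic outcome loosely tied to the predictor, for demonstration only.
df["delv_syntax"] = 20 + 0.4 * df["wh_info_seeking"] + rng.normal(0, 3, n)

model = smf.ols("delv_syntax ~ wh_info_seeking + total_utterances", data=df).fit()
print(model.params)  # the wh_info_seeking coefficient is the association of interest
```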

Results: When controlling for total maternal utterances, results revealed that non-wh-questions and rhetorical/yes and no questions were the most frequent types of questions produced by mothers, in terms of structure and purpose, respectively. However, wh-questions were predominantly information-seeking questions. This is important, as the use of information-seeking wh-questions was positively associated with children's syntactic skills, as measured by the DELV, and resulted in children producing longer utterances in response to these questions, as determined by child mean length of utterance in words.

Conclusion: Taken together, these findings suggest maternal use of wh-questions aids syntactic skills in children ages 5-7 years, likely because they require a more syntactically complex response on the child's behalf.

Supplemental material: https://doi.org/10.23641/asha.27276891.

Citations: 0
Category-Sensitive Age-Related Shifts Between Prosodic and Semantic Dominance in Emotion Perception Linked to Cognitive Capacities.
IF 2.2 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-12-09 Epub Date: 2024-11-04 DOI: 10.1044/2024_JSLHR-23-00817
Yi Lin, Xiaoqing Ye, Huaiyi Zhang, Fei Xu, Jingyu Zhang, Hongwei Ding, Yang Zhang

Purpose: Prior research extensively documented challenges in recognizing verbal and nonverbal emotion among older individuals when compared with younger counterparts. However, the nature of these age-related changes remains unclear. The present study investigated how older and younger adults comprehend four basic emotions (i.e., anger, happiness, neutrality, and sadness) conveyed through verbal (semantic) and nonverbal (facial and prosodic) channels.

Method: A total of 73 older adults (43 women, M_age = 70.18 years) and 74 younger adults (37 women, M_age = 22.01 years) partook in a fixed-choice test for recognizing emotions presented visually via facial expressions or auditorily through prosody or semantics.
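As a rough illustration of how accuracy in such a fixed-choice task can be tabulated by age group and presentation channel, the sketch below uses synthetic trial-level data with hypothetical labels; it is not the study's analysis pipeline.

```python
# Minimal sketch (synthetic data, hypothetical labels): recognition accuracy
# by age group and presentation channel in a fixed-choice emotion task.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
emotions = ["anger", "happiness", "neutrality", "sadness"]
channels = ["facial", "prosodic", "semantic"]
rows = []
for group in ["older", "younger"]:
    for _ in range(200):  # synthetic trials per group
        target = rng.choice(emotions)
        # 80% of trials answered correctly, otherwise a random emotion.
        response = target if rng.random() < 0.8 else rng.choice(emotions)
        rows.append({"group": group, "channel": rng.choice(channels),
                     "target": target, "response": response})

trials = pd.DataFrame(rows)
trials["correct"] = trials["target"] == trials["response"]
# Mean accuracy per age group x channel, the comparison of interest above.
print(trials.groupby(["group", "channel"])["correct"].mean().unstack())
```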

Results: The results confirmed age-related decline in recognizing emotions across all channels except for identifying happy facial expressions. Furthermore, the two age groups demonstrated both commonalities and disparities in their inclinations toward specific channels. While both groups displayed a shared dominance of visual facial cues over auditory emotional signals, older adults indicated a preference for semantics, whereas younger adults displayed a preference for prosody in auditory emotion perception. Notably, the dominance effects observed in older adults for visual and semantic cues were less pronounced for sadness and anger compared to other emotions. These challenges in emotion recognition and the shifts in channel preferences among older adults were correlated with their general cognitive capabilities.

Conclusion: Together, the findings underscore that age-related obstacles in perceiving emotions and alterations in channel dominance, which vary by emotional category, are significantly intertwined with overall cognitive functioning.

Supplemental material: https://doi.org/10.23641/asha.27307251.

Citations: 0
Inferring Word Class and Meaning From Spoken and Written Texts: A Comparison of Children With and Without Developmental Language Disorder.
IF 2.2 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-12-09 Epub Date: 2024-11-26 DOI: 10.1044/2024_JSLHR-23-00743
Karla K McGregor, Ron Pomper, Nichole Eden, Margo Appenzeller, Timothy Arbisi-Kelm, Elaina Polese, Deborah K Reed

Purpose: The aim of the study was to determine the ability of children with developmental language disorder (DLD) to infer word class and meaning from text and to document variations by word class (noun, verb, adjective) and modality (listening, reading). We also asked whether the children could integrate global cues across the entire passage as well as local cues from the immediate sentence frame to support inferences.

Method: Fourth graders with DLD (n = 28) and typical language development (TLD; n = 41) read and listened to expository texts and guessed the noun, verb, and adjective removed from each. Adults (n = 20) completed the task to establish a baseline of correct responses. We used latent semantic analysis (LSA) to determine the semantic fit of the responses to the texts and to determine whether global cues were more difficult for children with DLD than local cues.
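The sketch below illustrates the general idea of an LSA-style semantic-fit score: cosine similarity between a response and either the whole passage (global cues) or the immediate sentence frame (local cues), computed in a low-rank space built from an invented mini-corpus. It is a stand-in for the LSA tooling used in the study, not the actual pipeline.

```python
# Toy LSA-style semantic-fit score: cosine similarity between a response and
# either the whole passage (global cues) or the immediate sentence frame
# (local cues), in a truncated-SVD space built from an invented mini-corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "bees collect nectar from flowers and store honey in the hive",
    "the hive protects the colony from rain and cold",
    "worker bees guard the hive entrance",
    "flowers bloom in the spring meadow",
]
passage = " ".join(corpus)
sentence_frame = "bees store it in the hive"   # local context around the removed word
response = "honey"                             # the child's guess for the removed word

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)
svd = TruncatedSVD(n_components=2, random_state=0).fit(X)  # low-rank "LSA" space

def embed(text):
    return svd.transform(vectorizer.transform([text]))

global_fit = cosine_similarity(embed(response), embed(passage))[0, 0]
local_fit = cosine_similarity(embed(response), embed(sentence_frame))[0, 0]
print(f"global cosine: {global_fit:.2f}  local cosine: {local_fit:.2f}")
```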

Results: The DLD group was 24% less accurate than the TLD group. In both diagnostic groups, accuracy varied by word class (nouns > adjectives > verbs) but not modality (reading = listening). Word class errors were rare, and errors of semantic fit were frequent. LSA cosines were higher for correct responses relative to the passage as a whole than the immediate sentence frame, suggesting that both groups mined the more extensive information in the global cues to support inferences. Compared to the TLD group, the DLD group tended to make "worse" errors: repeating words from the sentence frame or coming up with no response at all. Accuracy in the DLD group, but not the TLD group, was related to vocabulary knowledge. When the two groups were collapsed, scores on verbal short-term/working memory and sustained attention also predicted performance, but weaknesses in these aspects of executive function on the part of individuals with DLD did not fully explain the difference between the performance of the DLD and TLD groups.

Conclusions: Whether listening or reading, fourth graders with DLD are less able to infer word meaning from texts than their age-mates. The problem reflects, in part, deficits in executive function and lexical semantic knowledge.

Citations: 0
Characterizing Physiologic Swallowing Impairment Profiles: A Large-Scale Exploratory Study of Head and Neck Cancer, Stroke, Chronic Obstructive Pulmonary Disease, Dementia, and Parkinson's Disease.
IF 2.2 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-12-09 Epub Date: 2024-11-18 DOI: 10.1044/2024_JSLHR-24-00091
Alex E Clain, Noelle Samia, Kate Davidson, Bonnie Martin-Harris

Purpose: The purpose of the present study was to use a large swallowing database to explore and compare the swallow-physiology impairment profiles of five dysphagia-associated diagnoses: chronic obstructive pulmonary disease (COPD), dementia, head and neck cancer (HNC), Parkinson's disease (PD), and stroke.

Method: A total of 8,190 patients across five diagnoses were extracted from a de-identified swallowing database, that is, the Modified Barium Swallow Impairment Profile Swallowing Data Registry, for the present exploratory cross-sectional analysis. To identify the impairment profiles of the five diagnoses, we fit 18 partial proportional odds models, one for each of the 17 Modified Barium Swallow Impairment Profile components and the Penetration-Aspiration Scale, with impairment score as the dependent variable and diagnoses, age, sex, and race as the independent variables with interactions between age and diagnoses and between PD and dementia (in effect creating a PD with dementia [PDwDem] group). For components with > 5% missingness, we applied inverse probability weighting to correct for bias.
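The sketch below gestures at this modeling setup: inverse probability weights for missingness estimated from a logistic model, plus an ordinal (proportional-odds) logit for an impairment score. Note that statsmodels' OrderedModel is a plain proportional-odds stand-in, not the partial proportional odds models the authors fit, and all variable names and data here are hypothetical.

```python
# Simplified sketch (hypothetical variables and data): inverse probability
# weights for missingness from a logistic model, plus an ordinal
# (proportional-odds) logit for an impairment score. This is a plainer
# stand-in for the partial proportional odds models fit in the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "diagnosis": rng.choice(["COPD", "dementia", "HNC", "PD", "stroke"], n),
    "age": rng.normal(70, 10, n),
    "impairment": rng.integers(0, 4, n).astype(float),  # 0-3 ordinal score
})
# Simulate missingness that depends on age, then model P(observed | age).
observed = rng.random(n) > 1 / (1 + np.exp(-(df["age"] - 85) / 5))
df.loc[~observed, "impairment"] = np.nan
miss_fit = sm.Logit(observed.astype(int), sm.add_constant(df["age"])).fit(disp=0)
df["ipw"] = 1.0 / miss_fit.predict(sm.add_constant(df["age"]))  # inverse probability weights

complete = df.dropna(subset=["impairment"])
exog = pd.get_dummies(complete["diagnosis"], drop_first=True).astype(float)
exog["age"] = complete["age"]
ordinal = OrderedModel(complete["impairment"].astype(int), exog, distr="logit")
print(ordinal.fit(method="bfgs", disp=False).params)
# In the full analysis, the weights in df["ipw"] would enter a weighted fit.
```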

Results: PD and COPD did not significantly differ on 13 of the 18 outcome variables (all ps > .02). Dementia, stroke, and PDwDem all showed worse impairments than COPD or PD on five of six oral components (all ps < .007). HNC had worse impairment than all diagnoses except PDwDem for nine of 10 pharyngeal components (all ps < .006). Stroke and HNC had worse penetration/aspiration than all other diagnoses (all ps < .003).

Conclusions: The present results show that there are both common and differing impairment profiles among these five diagnoses. These commonalities and differences in profiles provide a basis for the generation of hypotheses about the nature and severity of dysphagia in these populations. These results are also likely highly generalizable given the size and representativeness of the data set.

Supplemental material: https://doi.org/10.23641/asha.27478245.

Citations: 0
Executive Function Associations With Audibility-Adjusted Speech Perception in Noise.
IF 2.2 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-12-09 Epub Date: 2024-10-30 DOI: 10.1044/2024_JSLHR-24-00333
Mark A Eckert, Lois J Matthews, Kenneth I Vaden, Judy R Dubno

Purpose: Speech recognition in noise is challenging for listeners and appears to require support from executive functions to focus attention on rapidly unfolding target speech, track misunderstanding, and sustain attention. The current study was designed to test the hypothesis that lower executive function abilities explain poorer speech recognition in noise, including among older participants with hearing loss who often exhibit diminished speech recognition in noise and cognitive abilities.

Method: A cross-sectional sample of 400 younger-to-older adult participants (19 to < 90 years of age) from the community-based Medical University of South Carolina Longitudinal Cohort Study of Age-related Hearing Loss were administered tasks with executive control demands to assess individual variability in a card-sorting measure of set-shifting/performance monitoring, a dichotic listening measure of selective attention/working memory, sustained attention, and processing speed. Key word recognition in the high- and low-context speech perception-in-noise (SPIN) tests provided measures of speech recognition in noise. The SPIN scores were adjusted for audibility using the Articulation Index to characterize the impact of varied hearing sensitivity unrelated to reduced audibility on cognitive and speech recognition associations.
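A minimal sketch of the regression structure described above follows: an executive-function measure predicting an audibility-adjusted SPIN score while controlling for age, PTA, sex, and education. All variable names and data are synthetic, and the Articulation Index adjustment itself is not reproduced.

```python
# Minimal sketch (synthetic data, hypothetical variable names): regress an
# audibility-adjusted SPIN score on an executive-function measure while
# controlling for age, pure-tone average (PTA), sex, and education.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "age": rng.uniform(19, 89, n),
    "pta": rng.uniform(0, 60, n),           # dB HL, invented values
    "sex": rng.choice(["F", "M"], n),
    "education": rng.integers(10, 21, n),   # years of education
    "set_shifting": rng.normal(0, 1, n),    # standardized card-sorting score
})
df["spin_adj"] = 70 + 3 * df["set_shifting"] - 0.15 * df["pta"] + rng.normal(0, 5, n)

fit = smf.ols("spin_adj ~ set_shifting + age + pta + C(sex) + education", data=df).fit()
print(fit.params)
```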

Results: Set-shifting, dichotic listening, and processing speed each explained unique and significant variance in audibility-adjusted, low-context SPIN scores (ps < .001), including after controlling for age, pure-tone threshold average (PTA), sex, and education level. The dichotic listening and processing speed effect sizes were significantly diminished when controlling for PTA, indicating that participants with poorer hearing sensitivity were also likely to have lower executive function and lower audibility-adjusted speech recognition.

Conclusions: Poor set-shifting/performance monitoring, slow processing speed, and poor selective attention/working memory appeared to partially explain difficulties with speech recognition in noise after accounting for audibility. These results are consistent with the premise that distinct executive functions support speech recognition in noise.

Citations: 0
Accurately Identifying Language Disorder in School-Age Children Using Dynamic Assessment of Narrative Language.
IF 2.2 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-12-09 Epub Date: 2024-11-21 DOI: 10.1044/2024_JSLHR-23-00594
Douglas B Petersen, Alisa Konishi-Therkildsen, Kallie Dawn Clark, Anahi Kamila DeRobles, Ashley Elizabeth Frahm, Kristi Jones, Camryn Lettich, Trina D Spencer

Purpose: Several studies have demonstrated that dynamic assessment can be a less biased, valid approach for the identification of language disorder among diverse school-age children. However, all prior studies have included a relatively small number of participants, which is generally not adequate for psychometric research. This is the first large-scale study to (a) examine whether a dynamic assessment of narrative language yields comparable outcomes regardless of several demographic variables including age, race/ethnicity, multilingualism, or gender; (b) examine the sensitivity and specificity of the dynamic assessment of language among a large sample of students with and without language disorder; and (c) identify specific cut-points by grade to provide clinically useful data.

Method: Participants included 634 diverse first- through fifth-grade students with and without language learning disorder. Students were confirmed as having a language disorder using a triangulation technique involving several sources of data. A dynamic assessment of narrative language, which took approximately 10 min, was administered to all students.
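To make the sensitivity/specificity and cut-point logic concrete, the sketch below scans candidate cut-points on a synthetic modifiability score against a confirmed disorder label; the numbers are invented and do not reflect the study's data.

```python
# Minimal sketch (invented numbers): scan cut-points on a dynamic-assessment
# modifiability score and report sensitivity and specificity against a
# confirmed language-disorder label.
import numpy as np

rng = np.random.default_rng(4)
n_disorder, n_typical = 150, 450
scores = np.concatenate([rng.normal(8, 2, n_disorder),    # lower modifiability
                         rng.normal(14, 2, n_typical)])   # higher modifiability
has_disorder = np.concatenate([np.ones(n_disorder, bool), np.zeros(n_typical, bool)])

def sens_spec(cut):
    flagged = scores <= cut            # flag "disorder" at or below the cut-point
    sensitivity = (flagged & has_disorder).sum() / has_disorder.sum()
    specificity = (~flagged & ~has_disorder).sum() / (~has_disorder).sum()
    return sensitivity, specificity

for cut in np.arange(6.0, 16.0, 0.5):  # keep cut-points with >= 90% on both metrics
    se, sp = sens_spec(cut)
    if se >= 0.9 and sp >= 0.9:
        print(f"cut-point {cut:4.1f}: sensitivity {se:.2f}, specificity {sp:.2f}")
```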

Results: Results indicated that the dynamic assessment had excellent (> 90%) sensitivity and specificity and that modifiability scores were not meaningfully different across any of the demographic variables.

Conclusions: The dynamic assessment of narrative language accurately identified language disorder across all student demographic groups. These findings suggest that dynamic assessment may provide less biased classification than traditional, static forms of assessment.

Citations: 0
Development of Verb Inflectional Complexity in Palestinian Arabic.
IF 2.2 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-12-03 DOI: 10.1044/2024_JSLHR-23-00722
Roni Henkin-Roitfarb, Sigal Uziel, Rozana Ishaq

Purpose: This study describes the development of verb inflectional morphology in an urban dialect of Palestinian Arabic (PA) spoken in northern Israel, specifically in the city of Haifa, and explores the effect of language typology on acquisition.

Method: We analyzed naturalistic longitudinal speech samples from one monolingual Arabic-speaking girl aged 1;11-2;3 during spontaneous interactions with family members.

Results: Initially, truncated forms ("bare stems") were common but disappeared by the end of the study. By age 1;11, the girl was in the proto-morphological stage, displaying clear three-member mini-paradigms. Affixation complexity gradually increased, with adjacent and obligatory suffixes acquired before distant and optional prefixes. The early acquisition of indicative prefixes (b-, m-) preceded the later emergence of complex proclitics (e.g., volitive d-, progressive ʕam), suggesting gradual, systematic morphological acquisition.
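As a toy illustration of what counting three-member mini-paradigms involves, the sketch below groups invented (lemma, surface form) tokens and keeps lemmas attested in at least three distinct forms; the forms are placeholders, not data from the study.

```python
# Toy illustration (invented forms, not data from the study): a lemma counts as
# a "mini-paradigm" when it is attested in at least three distinct inflected
# forms in the sample.
from collections import defaultdict

# Hypothetical (lemma, surface form) tokens extracted from a transcript.
tokens = [
    ("write", "katab"), ("write", "b-yiktib"), ("write", "katab-it"),
    ("eat", "akal"), ("eat", "b-aakul"),
    ("sleep", "naam"), ("sleep", "bi-naam"), ("sleep", "naam-at"), ("sleep", "naam"),
]

forms_per_lemma = defaultdict(set)
for lemma, form in tokens:
    forms_per_lemma[lemma].add(form)

mini_paradigms = {lemma: sorted(forms) for lemma, forms in forms_per_lemma.items()
                  if len(forms) >= 3}
print(mini_paradigms)  # lemmas with three or more distinct forms
```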

Conclusions: We propose three principles for the development of PA verb inflection: (a) Adjacency: Affixes adjacent to the base are acquired first. (b) R-salience: Suffixes are acquired earlier than prefixes. (c) Obligatoriness: Obligatory morphemes precede optional ones. These principles predict the girl's morphological development and reflect sensitivity to PA's richly inflecting typology. This study highlights the need for detailed descriptive research that is essential for understanding language acquisition processes and informing assessment tools, intervention programs, and educational curricula for PA-speaking children.

Citations: 0
Cortical Tracking of Speech Is Reduced in Adults Who Stutter When Listening for Speaking.
IF 2.2 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-11-07 Epub Date: 2024-10-22 DOI: 10.1044/2024_JSLHR-24-00227
Simone Gastaldon, Pierpaolo Busan, Nicola Molinaro, Mikel Lizarazu

Purpose: The purpose of this study was to investigate cortical tracking of speech (CTS) in adults who stutter (AWS) compared to typically fluent adults (TFAs) to test the involvement of the speech-motor network in tracking rhythmic speech information.

Method: Participants' electroencephalogram was recorded while they simply listened to sentences (listening only) or completed them by naming a picture (listening for speaking), thus manipulating the upcoming involvement of speech production. We analyzed speech-brain coherence and brain connectivity during listening.
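The sketch below shows one common way to compute speech-brain coherence in the 3- to 5-Hz band, using scipy's coherence between a synthetic speech envelope and a synthetic EEG channel; the sampling rate and window length are assumptions, and this is not the authors' pipeline.

```python
# Minimal sketch (synthetic signals, assumed sampling rate): speech-brain
# coherence between a speech amplitude envelope and one EEG channel, averaged
# over the 3-5 Hz (theta / syllabic-rate) band examined in the study.
import numpy as np
from scipy.signal import coherence

fs = 250                       # Hz, assumed sampling rate after preprocessing
t = np.arange(0, 60, 1 / fs)   # 60 s of data
rng = np.random.default_rng(5)

envelope = np.abs(np.sin(2 * np.pi * 4 * t)) + 0.3 * rng.standard_normal(t.size)
eeg = 0.5 * np.abs(np.sin(2 * np.pi * 4 * t + 0.4)) + rng.standard_normal(t.size)

freqs, coh = coherence(envelope, eeg, fs=fs, nperseg=4 * fs)  # 4 s windows
theta = (freqs >= 3) & (freqs <= 5)
print(f"mean 3-5 Hz speech-brain coherence: {coh[theta].mean():.3f}")
```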

Results: During the listening-for-speaking task, AWS exhibited reduced CTS in the 3- to 5-Hz range (theta), corresponding to the syllabic rhythm. The effect was localized in the left inferior parietal and right pre/supplementary motor regions. Connectivity analyses revealed that TFAs had stronger information transfer in the theta range in both tasks in fronto-temporo-parietal regions. When considering the whole sample of participants, increased connectivity from the right superior temporal cortex to the left sensorimotor cortex was correlated with faster naming times in the listening-for-speaking task.

Conclusions: Atypical speech-motor functioning in stuttering impacts speech perception, especially in situations requiring articulatory alertness. The involvement of frontal and (pre)motor regions in CTS in TFAs is highlighted. Further investigation is needed into speech perception in individuals with speech-motor deficits, especially when smooth transitioning between listening and speaking is required, such as in real-life conversational settings.

Supplemental material: https://doi.org/10.23641/asha.27234885.

Citations: 0
Hearing Impairment: Reduced Pupil Dilation Response and Frontal Activation During Degraded Speech Perception.
IF 2.2 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-11-07 Epub Date: 2024-10-11 DOI: 10.1044/2024_JSLHR-24-00017
Adriana A Zekveld, Sophia E Kramer, Dirk J Heslenfeld, Niek J Versfeld, Chris Vriend

Purpose: A relevant aspect of listening is the effort required during speech processing, which can be assessed by pupillometry. Here, we assessed the pupil dilation response of normal-hearing (NH) and hard of hearing (HH) individuals during listening to clear sentences and masked or degraded sentences. We combined this assessment with functional magnetic resonance imaging (fMRI) to investigate the neural correlates of the pupil dilation response.

Method: Seventeen NH participants (M_age = 46 years) were compared to 17 HH participants (M_age = 45 years) who were individually matched in age and educational level. Participants repeated sentences that were presented clearly, that were distorted, or that were masked. The sentence intelligibility level of masked and distorted sentences was 50% correct. Silent baseline trials were presented as well. Performance measures, pupil dilation responses, and fMRI data were acquired.
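A minimal sketch of a pupil dilation response measure follows: baseline-correct a synthetic pupil trace against the pre-sentence interval and take the mean and peak dilation in an assumed sentence window. Timings, units, and the trace itself are placeholders, not the study's processing.

```python
# Minimal sketch (synthetic trace, assumed timing): baseline-correct a pupil
# trace against the pre-sentence interval and take the mean and peak dilation
# in the sentence window.
import numpy as np

fs = 60                              # Hz, assumed eye-tracker sampling rate
t = np.arange(-1.0, 4.0, 1 / fs)     # 1 s baseline before sentence onset at t = 0
rng = np.random.default_rng(6)

# Synthetic pupil diameter (mm): slow dilation after onset plus measurement noise.
pupil = 3.0 + 0.2 * np.clip(t, 0, None) * np.exp(-t / 2) + 0.02 * rng.standard_normal(t.size)

baseline = pupil[t < 0].mean()
dilation = pupil - baseline          # baseline-corrected pupil dilation response
window = (t >= 0) & (t <= 3)         # assumed 3 s sentence window
print(f"mean dilation: {dilation[window].mean():.3f} mm, "
      f"peak dilation: {dilation[window].max():.3f} mm")
```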

Results: HH individuals had overall poorer speech reception than the NH participants, but not for noise-vocoded speech. In addition, an interaction effect was observed with smaller pupil dilation responses in HH than in NH listeners for the degraded speech conditions. Hearing impairment was associated with higher activation across conditions in the left superior temporal gyrus, as compared to the silent baseline. However, the region of interest analysis indicated lower activation during degraded speech relative to clear speech in bilateral frontal regions and the insular cortex, for HH compared to NH listeners. Hearing impairment was also associated with a weaker relation between the pupil response and activation in the right inferior frontal gyrus. Overall, degraded speech evoked higher frontal activation than clear speech.

Conclusion: Brain areas associated with attentional and cognitive-control processes may be increasingly recruited when speech is degraded and are related to the pupil dilation response, but this relationship is weaker in HH listeners.

Supplemental material: https://doi.org/10.23641/asha.27162135.

Citations: 0
FluencyBank Timestamped: An Updated Data Set for Disfluency Detection and Automatic Intended Speech Recognition.
IF 2.2 CAS Tier 2 (Medicine) Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-11-07 Epub Date: 2024-10-08 DOI: 10.1044/2024_JSLHR-24-00070
Amrit Romana, Minxue Niu, Matthew Perez, Emily Mower Provost

Purpose: This work introduces updated transcripts, disfluency annotations, and word timings for FluencyBank, which we refer to as FluencyBank Timestamped. This data set will enable the thorough analysis of how speech processing models (such as speech recognition and disfluency detection models) perform when evaluated with typical speech versus speech from people who stutter (PWS).

Method: We update the FluencyBank data set, which includes audio recordings from adults who stutter, to explore the robustness of speech processing models. Our update (semi-automated with manual review) includes new transcripts with timestamps and disfluency labels corresponding to each token in the transcript. Our disfluency labels capture typical disfluencies (filled pauses, repetitions, revisions, and partial words), and we explore how speech model performance compares for Switchboard (typical speech) and FluencyBank Timestamped. We present benchmarks for three speech tasks: intended speech recognition, text-based disfluency detection, and audio-based disfluency detection. For the first task, we evaluate how well Whisper performs for intended speech recognition (i.e., transcribing speech without disfluencies). For the next tasks, we evaluate how well a Bidirectional Embedding Representations from Transformers (BERT) text-based model and a Whisper audio-based model perform for disfluency detection. We select these models, BERT and Whisper, as they have shown high accuracies on a broad range of tasks in their language and audio domains, respectively.

Results: For the transcription task, we calculate an intended speech word error rate (isWER) between the model's output and the speaker's intended speech (i.e., speech without disfluencies). We find isWER is comparable between Switchboard and FluencyBank Timestamped, but that Whisper transcribes filled pauses and partial words at higher rates in the latter data set. Within FluencyBank Timestamped, isWER increases with stuttering severity. For the disfluency detection tasks, we find the models detect filled pauses, revisions, and partial words relatively well in FluencyBank Timestamped, but performance drops substantially for repetitions because the models are unable to generalize to the different types of repetitions (e.g., multiple repetitions and sound repetitions) from PWS. We hope that FluencyBank Timestamped will allow researchers to explore closing performance gaps between typical speech and speech from PWS.
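The sketch below illustrates the isWER idea with a standard Levenshtein word alignment between a recognizer's output and the intended (disfluency-free) words; it is an illustration of the metric, not the authors' implementation.

```python
# Minimal sketch of the isWER idea: word error rate between a recognizer's
# output and the speaker's intended (disfluency-free) words, via a standard
# Levenshtein word alignment. Not the authors' implementation.
def wer(reference: list[str], hypothesis: list[str]) -> float:
    # d[i][j] = edit distance between reference[:i] and hypothesis[:j]
    d = [[0] * (len(hypothesis) + 1) for _ in range(len(reference) + 1)]
    for i in range(len(reference) + 1):
        d[i][0] = i
    for j in range(len(hypothesis) + 1):
        d[0][j] = j
    for i in range(1, len(reference) + 1):
        for j in range(1, len(hypothesis) + 1):
            substitution = d[i - 1][j - 1] + (reference[i - 1] != hypothesis[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(reference)

intended = "i want to go home".split()
recognized = "i want uh want to go home".split()  # the recognizer kept a disfluency
print(f"isWER: {wer(intended, recognized):.2f}")  # transcribed disfluencies count as insertions
```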

Conclusions: Our analysis shows that there are gaps in speech recognition and disfluency detection performance between typical speech and speech from PWS. We hope that FluencyBank Timestamped will contribute to more advancements in training robust speech processing models.

Citations: 0