
Latest publications from the Journal of Speech Language and Hearing Research

Reliability and Diagnostic Accuracy of Semi-Automated and Automated Acoustic Quantification of Vocal Tremor Characteristics.
IF 2.2, CAS Q2 (Medicine), JCR Q1 (Audiology & Speech-Language Pathology), Pub Date: 2025-06-05, Epub Date: 2025-05-05, DOI: 10.1044/2025_JSLHR-24-00467
Youri Maryn, Kaitlyn Dwenger, Sidney Kaufmann, Julie Barkmeier-Kraemer

Purpose: This study compared three methods of acoustic algorithm-supported extraction and analysis of vocal tremor properties (i.e., rate, extent, and regularity of intensity level and fundamental frequency modulation): (a) visual perception and manual data extraction, (b) semi-automated data extraction, and (c) fully automated data extraction.

Method: Forty-five midvowel sustained [a:] and [i:] audio recordings were collected as part of a scientific project to learn about the physiologic substrates of vocal tremor. This convenience data set contained vowels with a representative variety in vocal tremor severity. First, the vocal tremor properties in intensity level and fundamental frequency tracks were visually inspected and manually measured using Praat software. Second, the vocal tremor properties were determined using two Praat scripts: automated with the script of Maryn et al. (2019) and semi-automated with an adjusted version of this script to enable the user to intervene with the signal processing. The reliability of manual vocal tremor property measurement was assessed using the intraclass correlation coefficient. The properties as measured with the two scripts (automated vs. semi-automated) were compared with the manually determined properties using correlation and diagnostic accuracy statistical methods.
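For readers who want a concrete sense of what algorithm-supported extraction of these modulation properties can look like, here is a minimal Python sketch using the praat-parselmouth and SciPy libraries. It is not the authors' Praat script: the file name, the 3-12 Hz search band, and the simple spectral-peak approach are illustrative assumptions.

```python
# Illustrative sketch (not the study's Praat script): estimate the rate and extent
# of fundamental frequency (fo) modulation in a sustained vowel recording.
import numpy as np
import parselmouth                      # praat-parselmouth, a Python wrapper around Praat
from scipy.signal import periodogram

snd = parselmouth.Sound("sustained_a.wav")       # hypothetical file name
pitch = snd.to_pitch(time_step=0.01)             # fo track sampled every 10 ms
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                                  # drop unvoiced frames (rough simplification)

# Modulation rate: dominant low-frequency peak in the demeaned fo track.
freqs, power = periodogram(f0 - np.mean(f0), fs=100.0)
band = (freqs >= 3.0) & (freqs <= 12.0)          # assumed tremor search range
tremor_rate = freqs[band][np.argmax(power[band])]

# Modulation extent: peak-to-peak fo excursion as a percentage of mean fo.
tremor_extent = 100.0 * (f0.max() - f0.min()) / f0.mean()

print(f"fo modulation rate ~ {tremor_rate:.1f} Hz, extent ~ {tremor_extent:.1f}%")
```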

Results: With intraclass correlation coefficients between .770 and .914, the reliability of the manual method was acceptable. The semi-automated method correlated better with the manual property measures and was more accurate in diagnosing vocal tremor than the automated method.
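As a rough illustration of the reliability and diagnostic-accuracy statistics referred to here, the snippet below computes an intraclass correlation coefficient with the pingouin package and an ROC area under the curve with scikit-learn. The column names and toy numbers are assumptions, not the study's data.

```python
import pandas as pd
import pingouin as pg
from sklearn.metrics import roc_auc_score

# Toy long-format table: each vowel sample measured twice by the manual method.
ratings = pd.DataFrame({
    "sample": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "rater":  ["r1", "r2"] * 6,
    "extent": [2.1, 2.3, 0.8, 0.9, 3.5, 3.2, 1.1, 1.3, 2.8, 2.6, 0.5, 0.6],
})
icc = pg.intraclass_corr(data=ratings, targets="sample", raters="rater", ratings="extent")
print(icc[["Type", "ICC"]])

# Diagnostic accuracy of a script-derived score against a manual "tremor present" label.
manual_label = [1, 0, 1, 0, 1, 0]            # hypothetical reference judgments
script_score = [3.1, 0.7, 2.8, 1.2, 2.5, 0.9]  # hypothetical automated measurements
print("ROC AUC:", roc_auc_score(manual_label, script_score))
```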

Discussion: Manual acoustic measurement of vocal tremor properties can be laborious and time-consuming. Automated or semi-automated acoustic methods may improve efficiency in vocal tremor property measurement in clinical as well as research settings. Although both Praat script-supported methods in this study yielded acceptable validity with the manual data measurements as a referent, the semi-automated method showed the best outcomes.

Supplemental material: https://doi.org/10.23641/asha.28873088.

Citations: 0
Contributions of Behavioral and Electrophysiological Spectrotemporal Processing to the Perception of Degraded Speech in Younger and Older Adults.
IF 2.2, CAS Q2 (Medicine), JCR Q1 (Audiology & Speech-Language Pathology), Pub Date: 2025-06-05, Epub Date: 2025-05-15, DOI: 10.1044/2025_JSLHR-24-00667
Bruna S Mussoi, A'Diva Warren, Jordin Benedict, Serena Sereki, Julia Jones Huyck

Purpose: The aim of this study was to evaluate (a) the effect of aging on spectral and temporal resolution, as measured both behaviorally and electrophysiologically, and (b) the contributions of spectral and temporal resolution and cognition to speech perception in younger and older adults.

Method: Eighteen younger and 18 older listeners with normal hearing or no more than mild-moderate hearing loss participated in this cross-sectional study. Speech recognition was assessed with the QuickSIN test and six-band noise-vocoded sentences. Frequency discrimination, temporal interval discrimination, and gap detection thresholds were obtained using a three-alternative forced-choice task. Cortical auditory evoked potentials were recorded in response to tonal frequency changes and to gaps in noise. Cognitive testing included nonverbal reasoning, vocabulary, working memory, and processing speed.
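The abstract does not specify the adaptive rule behind these forced-choice thresholds, but a common choice in psychoacoustics is a two-down/one-up staircase, which converges on roughly 71% correct. The sketch below simulates such a procedure for gap detection; the simulated listener, step size, and stopping rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_listener(gap_ms, true_threshold_ms=6.0):
    """Toy 3AFC listener: chance performance is 1/3, near-perfect well above threshold."""
    p_correct = 1/3 + (2/3) / (1 + np.exp(-(gap_ms - true_threshold_ms)))
    return rng.random() < p_correct

def two_down_one_up(start_ms=20.0, step_ms=2.0, n_reversals=8):
    gap, n_correct, direction, reversals = start_ms, 0, None, []
    while len(reversals) < n_reversals:
        if simulated_listener(gap):
            n_correct += 1
            if n_correct == 2:                 # two correct in a row -> harder (smaller gap)
                n_correct = 0
                if direction == "up":
                    reversals.append(gap)
                direction = "down"
                gap = max(gap - step_ms, 0.5)
        else:                                  # one incorrect -> easier (larger gap)
            n_correct = 0
            if direction == "down":
                reversals.append(gap)
            direction = "up"
            gap += step_ms
    return np.mean(reversals[-6:])             # threshold estimate from late reversals

print(f"Estimated gap detection threshold ~ {two_down_one_up():.1f} ms")
```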

Results: There were age-related declines on many outcome measures, including speech perception in noise, cognition (nonverbal reasoning, processing speed), behavioral gap detection thresholds, and neural correlates of spectral and temporal processing (smaller P1 amplitudes and prolonged P2 latencies in response to frequency change; smaller N1-P2 amplitudes and longer P1, N1, P2 latencies to temporal gaps). Hearing thresholds and neural processing of spectral and temporal information were the main predictors of degraded speech recognition performance, in addition to cognition and perceptual learning. These factors accounted for 58% of the variability on the QuickSIN test and 41% of variability on the noise-vocoded speech.
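The "variance accounted for" figures quoted here come from regression modeling. A minimal sketch of that idea, using made-up predictor names and simulated data rather than the study's variables, is shown below with statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 36
# Simulated stand-ins for hearing thresholds, a cortical response amplitude, and cognition.
df = pd.DataFrame({
    "pta_db":    rng.normal(20, 10, n),
    "n1p2_uv":   rng.normal(5, 1.5, n),
    "cognition": rng.normal(0, 1, n),
})
df["speech_in_noise_loss"] = (0.2 * df["pta_db"] - 1.0 * df["n1p2_uv"]
                              - 1.5 * df["cognition"] + rng.normal(0, 2, n))

X = sm.add_constant(df[["pta_db", "n1p2_uv", "cognition"]])
fit = sm.OLS(df["speech_in_noise_loss"], X).fit()
print(f"Variance explained (R^2): {fit.rsquared:.2f}")   # analogous to the 58% / 41% figures
```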

Conclusions: The results confirm and extend previous work demonstrating age-related declines in gap detection, cognition, and neural processing of spectral and temporal features of sounds. Neural measures of spectral and temporal processing were better predictors of speech perception than behavioral ones.

Supplemental material: https://doi.org/10.23641/asha.28883711.

Citations: 0
Right-Hemispheric White Matter Organization Is Associated With Speech Timing in Autistic Children.
IF 2.2, CAS Q2 (Medicine), JCR Q1 (Audiology & Speech-Language Pathology), Pub Date: 2025-06-05, Epub Date: 2025-05-19, DOI: 10.1044/2025_JSLHR-24-00548
Kelsey E Davison, Talia Liu, Rebecca M Belisle, Tyler K Perrachione, Zhenghan Qi, John D E Gabrieli, Helen Tager-Flusberg, Jennifer Zuk

Purpose: Converging research suggests that speech timing, including altered rate and pausing when speaking, can distinguish autistic individuals from nonautistic peers. Although speech timing can impact effective social communication, it remains unclear what mechanisms underlie individual differences in speech timing in autism.

Method: The present study examined the organization of speech- and language-related neural pathways in relation to speech timing in autistic and nonautistic children (24 autistic children, 24 nonautistic children [ages: 5-17 years]). Audio recordings from a naturalistic language sampling task (via narrative generation) were transcribed to extract speech timing features (speech rate, pause duration). White matter organization (as indicated by fractional anisotropy [FA]) was estimated for key tracts bilaterally (arcuate fasciculus, superior longitudinal fasciculus [SLF], inferior longitudinal fasciculus [ILF], frontal aslant tract [FAT]).
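Fractional anisotropy itself has a simple closed form once the diffusion tensor's eigenvalues are known. The function below implements that standard formula; it is only a definitional sketch, not the study's tractography pipeline, and the example eigenvalues are invented.

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """Standard FA formula from the three diffusion-tensor eigenvalues."""
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = np.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den

# Example: strongly directional diffusion (eigenvalues in mm^2/s are illustrative).
print(round(fractional_anisotropy(1.7e-3, 0.3e-3, 0.3e-3), 2))   # ~ 0.80
```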

Results: Results indicate that associations between speech timing and right-hemispheric white matter organization (FA in the right ILF and FAT) were specific to autistic children and were not observed among nonautistic controls. Among nonautistic children, associations with speech timing were specific to the left hemisphere (FA in the left SLF).

Conclusion: Overall, these findings enhance understanding of the neural architecture influencing speech timing in autistic children and, thus, carry implications for understanding potential neural mechanisms underlying speech timing differences in autism.

Supplemental material: https://doi.org/10.23641/asha.28934432.

Citations: 0
Development of Urdu Speech Audiometry Material for the Deaf and Hard of Hearing Community.
IF 2.2, CAS Q2 (Medicine), JCR Q1 (Audiology & Speech-Language Pathology), Pub Date: 2025-06-05, Epub Date: 2025-05-01, DOI: 10.1044/2025_JSLHR-24-00118
Sahar Rauf, Sarmad Hussain, Anam Amin, Shumaila Tanveer, Asma Jabeen

Purpose: This study aimed to develop standardized Urdu speech materials for assessing speech recognition threshold (SRT) and word recognition score (WRS) for clinical use in Pakistan.

Method: The development of Urdu speech materials followed four key parameters: phonemic coverage, phonetic dissimilarity, familiarity with the participants, and homogeneity in terms of audibility. Bisyllabic words for SRT measurement and monosyllabic words for WRS measurement were selected. The most familiar 50 spondee words and 50 monosyllabic words were selected for the evaluation of SRT and WRS, respectively, in children with normal hearing. Thirty spondee words and 34 monosyllabic words with relatively steep and homogeneous psychometric function slopes were included in the final lists.
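The psychometric function slope used as a selection criterion here is typically obtained by fitting percent-correct recognition against presentation level and taking the derivative at the 50% point. A minimal sketch with a logistic fit is shown below; the levels and scores are invented, and the authors' exact fitting procedure is not specified in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(level_db, midpoint_db, k):
    """Percent-correct word recognition as a logistic function of presentation level."""
    return 100.0 / (1.0 + np.exp(-k * (level_db - midpoint_db)))

# Hypothetical recognition scores for one word across presentation levels (dB HL).
levels  = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)
percent = np.array([2, 10, 30, 55, 80, 94, 99], dtype=float)

(midpoint_db, k), _ = curve_fit(logistic, levels, percent, p0=[15.0, 0.3])
slope_at_50 = 100.0 * k / 4.0      # derivative of the logistic at its midpoint, in %/dB
print(f"50% threshold ~ {midpoint_db:.1f} dB HL, slope ~ {slope_at_50:.1f} %/dB")
```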

Results: The mean psychometric function slope at the 50% threshold for the 30 selected spondee words was found to be 9.1%/dB, and for 34 monosyllabic words, it was found to be 6%/dB.

Conclusions: Bisyllabic words for SRT measurement and monosyllabic words for WRS measurement were successfully developed and evaluated in Lahore, Pakistan. There is a need for the development of speech audiometry materials in other Pakistani languages.

Citations: 0
What Influences Parenting Stress? Examining Parenting Stress and Self-Efficacy Across Groups of Children With Autism Spectrum Disorder, at Risk of Developmental Language Disorder, and With Typically Developing Language.
IF 2.2, CAS Q2 (Medicine), JCR Q1 (Audiology & Speech-Language Pathology), Pub Date: 2025-06-05, Epub Date: 2025-05-05, DOI: 10.1044/2025_JSLHR-24-00672
Merve Dilbaz-Gürsoy, Ayşın Noyan-Erbaş, Halime Tuna Çak Esen, Ayşen Köse, Esra Özcebe

Purpose: The purpose of this study was to examine whether there are differences in parenting stress levels and self-efficacy among children with autism spectrum disorder (ASD), at risk of developmental language disorder (rDLD), and with typically developing language (TDL). The study also investigated the children's language abilities and/or behavioral problems as potential predictors of parents' levels of stress and self-efficacy.

Method: The study assessed children's language skills and behavioral problems as well as parental stress and self-efficacy in a sample of 2- to 4-year-old children with ASD (n = 35), rDLD (n = 35), and with TDL (n = 25).

Results: The findings of the study revealed that parents of children with ASD experienced the highest level of parenting stress related to child characteristics and the lowest level of self-efficacy, whereas parents of children with rDLD had higher parenting stress than parents of children with TDL. Furthermore, although behavioral problems were shown to be a predictor of parenting stress in all groups, expressive language was identified as a predictor only in the rDLD group. While parental self-efficacy was also predicted by expressive language in the TDL group, self-efficacy affected parenting stress in parents of children with ASD and rDLD.

Conclusions: These findings demonstrated that parental stress was a complex phenomenon impacted by several factors. This study may suggest the importance of interventions that aim to decrease parental stress and enhance self-efficacy, going beyond the children's language skills and behavioral problems.

Citations: 0
Temporal Sensitivity in Patients With Type 1 Diabetes Mellitus and Insights Into Their Everyday Auditory Performance.
IF 2.2, CAS Q2 (Medicine), JCR Q1 (Audiology & Speech-Language Pathology), Pub Date: 2025-06-05, Epub Date: 2025-04-29, DOI: 10.1044/2025_JSLHR-24-00554
Ozlem Topcu, Süleyman Nahit Sendur, Hilal Dincer D'Alessandro, Merve Ozbal Batuk, Gonca Sennaroglu

Purpose: This study aimed to investigate the effects of Type 1 diabetes mellitus (T1DM) on low-frequency (LF) pitch and speech-in-noise perception linked to temporal sensitivity and everyday auditory performance. The relationships between these outcomes and potential confounders, such as diabetes duration, glycemic control, and neuropathy, were also examined.

Method: The participants consisted of 18 young patients with T1DM. They were matched with 18 healthy controls based on age, gender, and audiometric thresholds (up to 20 kHz). Measurements included behavioral measures of temporal sensitivity using the low-pass-filtered Word Stress Pattern (WSP-LPF) test and the Hearing in Noise Test (HINT), as well as a self-report measure using the Speech, Spatial and Qualities of Hearing Scale.
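Low-pass filtering of this kind is straightforward to reproduce with standard signal-processing tools. The sketch below applies a Butterworth low-pass filter to a word recording with SciPy; the 500 Hz cutoff, filter order, and file names are assumptions rather than the WSP-LPF specification.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

fs, speech = wavfile.read("word.wav")            # hypothetical word recording
speech = speech.astype(np.float64)

sos = butter(N=4, Wn=500.0, btype="lowpass", fs=fs, output="sos")   # assumed cutoff
filtered = sosfiltfilt(sos, speech, axis=0)      # zero-phase filtering

# Rescale and save as 16-bit PCM.
filtered = filtered / np.max(np.abs(filtered)) * 0.9
wavfile.write("word_lpf.wav", fs, (filtered * np.iinfo(np.int16).max).astype(np.int16))
```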

Results: Patients with T1DM showed significantly poorer performance on both the WSP-LPF (p < .001) and HINT (p = .004) tests compared to healthy controls. Specifically, patients with T1DM showed impaired perception of lexical stress cued by LF pitch and required higher signal-to-noise ratios to effectively perceive speech in complex listening situations. Self-report measures indicated reduced hearing satisfaction in patients with T1DM (p = .001). A statistically significant correlation was found between WSP-LPF scores and diabetes duration (p = .021).

Conclusions: The present findings reveal that T1DM negatively affects the perception of lexical stress and speech-in-noise performance, reflecting disruptions in temporal sensitivity. These impairments are present even in patients with normal audiometric thresholds, and addressing these deficits may be crucial for improving auditory function and developing targeted interventions.

Citations: 0
Investigating the Effects of Speaking Rate on Spoken Language Processing in Children Who Are Deaf and Hard of Hearing.
IF 2.2, CAS Q2 (Medicine), JCR Q1 (Audiology & Speech-Language Pathology), Pub Date: 2025-06-05, Epub Date: 2025-05-01, DOI: 10.1044/2025_JSLHR-24-00108
Rosanne Abrahamse, Titia Benders, Katherine Demuth, Nan Xu Rattanasone

Purpose: This study aimed to investigate how hearing loss affects (a) spoken language processing and (b) processing of faster speech in school-age children who are deaf and hard of hearing (DHH).

Method: Spoken language processing was compared in thirty-six 7- to 12-year-olds who are DHH and 31 peers with normal hearing using a word detection task. Children listened for a target word in sentences presented at a normal (4.5 syllables per second [syll./s]) versus fast (6.1 syll./s) speaking rate and pressed a key when they heard the word in the sentence. Response time was taken as an outcome measure. Relationships between working memory capacity, vocabulary size, and processing speed were also assessed.
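How the fast-rate sentences were produced is not detailed in the abstract; one common way to create a faster version of a recording without shifting its pitch is time-scale modification. The sketch below does this with librosa, mapping the normal 4.5 syll./s rate onto 6.1 syll./s; the file names are placeholders, and this is an illustration rather than the study's stimulus-preparation method.

```python
import librosa
import soundfile as sf

# Illustrative time compression of a sentence recording (pitch is preserved).
y, sr = librosa.load("sentence_normal_rate.wav", sr=None)   # hypothetical stimulus file
rate_factor = 6.1 / 4.5                                     # ~1.36x faster
y_fast = librosa.effects.time_stretch(y, rate=rate_factor)
sf.write("sentence_fast_rate.wav", y_fast, sr)

print(f"original {len(y) / sr:.2f} s -> fast {len(y_fast) / sr:.2f} s")
```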

Results: Children who are DHH were slower than their peers with normal hearing to detect words in sentences, but no evidence for a negative effect of speaking rate was observed. Furthermore, contrary to expectation, a larger working memory capacity was associated with slower spoken language processing, with effects stronger for younger children with smaller vocabulary sizes.

Conclusions: Regardless of speaking rate, children who are DHH may be at risk for delays in spoken language processing relative to peers with normal hearing. These delays may have consequences for their access to learning and communication in spoken forms in everyday environments, which contain additional challenges such as background noise, competing talkers, and speaker variability.

Supplemental material: https://doi.org/10.23641/asha.28842611.

Citations: 0
Effects of Fundamental Frequency and Vocal Tract Resonance on Sentence Recognition in Noise.
IF 2.2, CAS Q2 (Medicine), JCR Q1 (Audiology & Speech-Language Pathology), Pub Date: 2025-06-05, Epub Date: 2025-05-19, DOI: 10.1044/2025_JSLHR-24-00758
Jing Yang, Xianhui Wang, Victoria Costa, Li Xu

Purpose: This study examined the effects of change in a talker's sex-related acoustic properties (fundamental frequency [F0] and vocal tract resonance [VTR]) on speech recognition in noise.

Method: The stimuli were Hearing in Noise Test sentences, with the F0 and VTR of the original male talker manipulated into four conditions: low F0 and low VTR (i.e., the original recordings), low F0 and high VTR, high F0 and high VTR, and high F0 and low VTR. The listeners were 42 English-speaking, normal-hearing adults (21-31 years old). The sentences mixed with speech spectrum-shaped noise at various signal-to-noise ratios (i.e., -10, -5, 0, and +5 dB) were presented to the listeners for recognition.
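Mixing sentences with noise at a fixed signal-to-noise ratio is a small, well-defined operation. The sketch below shows one way to do it in NumPy, using a synthetic tone and white noise as stand-ins for the sentence and the speech spectrum-shaped noise; it is not the authors' stimulus-preparation code.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db, then add it."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10.0 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(0)
fs = 16000
speech = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)   # stand-in for a sentence
noise = rng.normal(0.0, 1.0, fs)                        # stand-in for speech-shaped noise
for snr_db in (-10, -5, 0, 5):
    mixed = mix_at_snr(speech, noise, snr_db)
    print(f"{snr_db:+d} dB SNR, mixture RMS = {np.sqrt(np.mean(mixed ** 2)):.3f}")
```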

Results: The results revealed no significant differences between the high-F0/high-VTR and low-F0/low-VTR conditions in sentence recognition performance or the estimated speech reception thresholds (SRTs). However, in the high-F0/low-VTR and low-F0/high-VTR conditions, recognition performance was reduced, and the listeners showed significantly higher SRTs relative to those in the high-F0/high-VTR and low-F0/low-VTR conditions.

Conclusion: These findings indicate that male and female voices with matched F0 and VTR (e.g., low F0 with low VTR, or high F0 with high VTR) yield equivalent speech recognition in noise, whereas voices with mismatched F0 and VTR may reduce intelligibility in noisy environments.

Supplemental material: https://doi.org/10.23641/asha.29052305.

Citations: 0
Assessing Perceptual Difficulty Across Speech Sound Categories and Contrasts to Optimize Minimal Pair Training.
IF 2.2, CAS Q2 (Medicine), JCR Q1 (Audiology & Speech-Language Pathology), Pub Date: 2025-06-05, Epub Date: 2025-04-29, DOI: 10.1044/2025_JSLHR-24-00254
Kristi Hendrickson, Nadine Lee, Elizabeth A Walker, Meaghan Foody, Philip Combiths

Purpose: Utilizing psycholinguistic methods, this article aims to ascertain the perceptual difficulty associated with distinguishing between different speech sound categories and individual contrasts within those categories, with the ultimate goal of informing the use of minimal pair contrasts in perceptual training.

Design: Using eye-tracking in the Visual World Paradigm, adults with normal hearing (N = 30) were presented with an auditory word and were required to identify the matching image from a selection of four options: the target word, two unrelated words, and a minimal pair competitor contrasting with the target word in word-final position in one of four categories (manner, place, voicing, nasality).

Results: We measured fixations to minimal pair competitors over time and found that manner and place competitors exhibited greater competition compared to voicing and nasality competitors. Notably, within manner competitors, substantial differences in discrimination difficulty were observed among individual contrasts.
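Competition in the Visual World Paradigm is usually quantified as the proportion of eye-tracking samples on each interest area over time. The snippet below computes competitor fixation proportions in 100 ms bins from a toy sample-level table; the column names and data are invented and are not the study's analysis code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Toy table: one row per eye-tracking sample (10 trials x 50 samples at 20 ms each).
samples = pd.DataFrame({
    "trial":   np.repeat(np.arange(10), 50),
    "time_ms": np.tile(np.arange(0, 1000, 20), 10),
    "look":    rng.choice(["target", "competitor", "unrelated1", "unrelated2"],
                          size=500, p=[0.5, 0.25, 0.125, 0.125]),
})

# Proportion of fixations to the minimal-pair competitor in 100 ms bins.
samples["bin_ms"] = (samples["time_ms"] // 100) * 100
competitor_prop = (samples.assign(is_comp=samples["look"].eq("competitor"))
                          .groupby("bin_ms")["is_comp"].mean())
print(competitor_prop)
```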

Conclusions: Conventional views of speech sound perception have often grouped sounds into broad categories (manner, place, voicing, nasality), potentially overlooking the nuanced differences within these groupings, which significantly affect perception. This work is vital for advancing our understanding of speech perception and its mechanisms. Furthermore, this work will help to refine minimal pair treatment strategies in clinical contexts.

Supplemental material: https://doi.org/10.23641/asha.28848446.

Citations: 0
The Significance of a Higher Prevalence of ADHD and ADHD Symptoms in Children Who Stutter.
IF 2.2, CAS Q2 (Medicine), JCR Q1 (Audiology & Speech-Language Pathology), Pub Date: 2025-06-05, Epub Date: 2025-05-14, DOI: 10.1044/2025_JSLHR-24-00668
Bridget Walsh, Seth E Tichenor, Katelyn L Gerwin

Purpose: Research suggests that attention-deficit/hyperactivity disorder (ADHD) and its symptoms occur more frequently in individuals who stutter. The purpose of this study was to document the prevalence of ADHD diagnoses and ADHD symptoms in children who stutter and examine potential relationships between ADHD and stuttering characteristics.

Method: A total of 204 children between the ages of 5 and 18 years (M = 9.9 years; SD = 3.5 years) and their parents participated in the study. Parents completed the ADHD Rating Scale (ADHD-RS) indexing Inattention and Hyperactivity-Impulsivity symptoms, and children completed the age-appropriate version of the Overall Assessment of the Speaker's Experience of Stuttering assessing the adverse impact of stuttering. Chi-square proportions and Mann-Whitney U tests were used to assess differences in demographic and other variables of interest between children with and without an ADHD diagnosis. Multiple linear regression was used to assess relationships between ADHD symptoms and stuttering characteristics.
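For readers unfamiliar with the tests named here, the short example below runs a chi-square test on a contingency table and a Mann-Whitney U test on two groups with SciPy. The counts and scores are invented for illustration only and do not reflect the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Toy 2x2 table: ADHD diagnosis (rows: yes/no) by some demographic category (columns).
table = np.array([[20, 15],
                  [90, 79]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Toy comparison of adverse-impact scores with vs. without an ADHD diagnosis.
with_adhd    = [2.4, 3.1, 2.8, 3.6, 2.2, 3.0]
without_adhd = [1.9, 2.5, 2.1, 2.7, 1.8, 2.3, 2.0]
u_stat, p = mannwhitneyu(with_adhd, without_adhd, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p:.3f}")
```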

Results: Parents reported that 17.2% of children who stutter in our sample had been diagnosed with ADHD. Over 40% of children without an ADHD diagnosis had ADHD-RS scores that met the criteria for further evaluation. No significant relationship between ADHD symptoms and stuttering severity was found, but child age and inattention scores significantly, albeit modestly, predicted the adverse impact of stuttering.

Conclusions: Researchers and clinicians might be privy to a child's ADHD diagnosis, but they should recognize that many children who stutter without an ADHD diagnosis may exhibit elevated symptoms of inattention and hyperactivity-impulsivity. These symptoms can complicate both research outcomes and the treatment of stuttering.

Supplemental material: https://doi.org/10.23641/asha.28899620.

Citations: 0