Title: Auditory Processing of Speech and Nonspeech in People Who Stutter
Authors: Matthew C Phillips, Emily B Myers
Journal: Journal of Speech Language and Hearing Research (JCR Q1, Audiology & Speech-Language Pathology; impact factor 2.2)
DOI: https://doi.org/10.1044/2024_JSLHR-24-00107
Published: 2024-08-05 (Epub 2024-07-26)
Citations: 0
Abstract
Purpose: We investigated speech and nonspeech auditory processing of temporal and spectral cues in people who do and do not stutter. We also asked whether self-reported stuttering severity was predicted by performance on the auditory processing measures.
Method: People who stutter (n = 23) and people who do not stutter (n = 28) completed a series of four auditory processing tasks online. These tasks consisted of speech and nonspeech stimuli differing in spectral or temporal cues. We then used independent-samples t-tests to assess differences in phonetic categorization slopes between groups and linear mixed-effects models to test differences in nonspeech auditory processing between stuttering and nonstuttering groups, and stuttering severity as a function of performance on all auditory processing tasks.
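The group comparison described above (independent-samples t-tests on phonetic categorization slopes) can be sketched as follows. This is an illustrative example only: the slope values below are simulated with invented means and spreads, not the study's data; only the group sizes (n = 23 and n = 28) come from the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated phonetic categorization slopes (hypothetical values;
# a steeper slope indicates more categorical perception).
slopes_stutter = rng.normal(loc=2.0, scale=0.6, size=23)      # n = 23 people who stutter
slopes_nonstutter = rng.normal(loc=2.5, scale=0.6, size=28)   # n = 28 people who do not

# Independent-samples t-test on the group slopes.
t_stat, p_value = stats.ttest_ind(slopes_stutter, slopes_nonstutter)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The mixed-effects analyses in the study would additionally model trial-level nonspeech responses with random effects per participant (e.g., via `statsmodels` `MixedLM`), which this minimal sketch does not attempt.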
Results: We found statistically significant differences between people who do and do not stutter in phonetic categorization of a continuum differing in a temporal cue and in discrimination of nonspeech stimuli differing in a spectral cue. A significant proportion of variance in self-reported stuttering severity was predicted by performance on the auditory processing measures.
Conclusions: Taken together, these results suggest that people who stutter process both speech and nonspeech auditory information differently than people who do not stutter and may point to subtle differences in auditory processing that could contribute to stuttering. We also note that these patterns could be the consequence of listening to one's own speech, rather than the cause of production differences.
About the Journal
Mission: JSLHR publishes peer-reviewed research and other scholarly articles on the normal and disordered processes in speech, language, hearing, and related areas such as cognition, oral-motor function, and swallowing. The journal is an international outlet for both basic research on communication processes and clinical research pertaining to screening, diagnosis, and management of communication disorders as well as the etiologies and characteristics of these disorders. JSLHR seeks to advance evidence-based practice by disseminating the results of new studies as well as providing a forum for critical reviews and meta-analyses of previously published work.
Scope: The broad field of communication sciences and disorders, including speech production and perception; anatomy and physiology of speech and voice; genetics, biomechanics, and other basic sciences pertaining to human communication; mastication and swallowing; speech disorders; voice disorders; development of speech, language, or hearing in children; normal language processes; language disorders; disorders of hearing and balance; psychoacoustics; and anatomy and physiology of hearing.