
Journal of Speech Language and Hearing Research: Latest Publications

Craniofacial and Velopharyngeal Dimensions in Infants 0-12 Months: Between- and Within-Group Differences Based on Age and Sex.
IF 2.6 | Medicine, CAS Tier 2 | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-09-11 | DOI: 10.1044/2024_jslhr-24-00084
Samantha J Power,Annalisa V Piccorelli,David L Jones,Ilana Neuberger,Gregory C Allen,Krystle Barhaghi,Katelyn J Kotlarek
Purpose: The purpose of the present study is to (a) provide quantitative data on the growth of levator veli palatini (LVP), velopharyngeal (VP), and craniofacial dimensions in children under 12 months while controlling for corrected age and sex and (b) compare variability within age and sex groups.

Method: Magnetic resonance imaging scans of 75 infants between 0 and 12 months were measured and divided into four age groups. These data were obtained as part of a larger retrospective study. Following exclusion criteria, scans were analyzed, and dependent variables were obtained.

Results: There was a statistically significant (p < .0001) difference between corrected age groups on LVP muscle, VP, and craniofacial variables while controlling for sex. Significant growth effects were observed for LVP length (p < .0001), extravelar length (p < .0001), intravelar length (p = .048), midline thickness (p = .0001), origin-origin distance (p < .0001), velar length (p < .0001), velar thickness (p = .003), nasion-sella turcica distance (p < .0001), sella turcica-basion distance (p < .0001), and hard palate length (p < .0001). Significant sex effects were observed for pharyngeal depth (p = .026) and effective VP ratio (p = .014). When age was treated as a continuous variable, similar results were observed for all variables except pharyngeal depth. Within-group comparisons revealed the most variability occurs between 3 and 5.99 months for LVP and craniofacial variables and between 9 and 11.99 months of age for VP variables. Male participants demonstrated greater variability than female participants.

Conclusions: Differences were observed in LVP, VP, and craniofacial variables in children under 12 months while controlling for sex. Males demonstrated larger values and greater variability for most variables.
Citations: 0
Introduction to the Forum: Native Language, Dialect, and Foreign Accent in Dysarthria.
IF 2.6 | Medicine, CAS Tier 2 | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-09-09 | DOI: 10.1044/2024_jslhr-24-00522
Yunjung Kim
This timely collection is an international effort to serve as a foundation to encourage research that offers insights into the interaction between language variation and motor speech disorders. Specifically, this forum aimed to provide a platform that (a) explores and demonstrates the role of language variation in the manifestation of dysarthria, (b) considers language variation in clinical assessment and management, and (c) promotes awareness of diverse language backgrounds of people with dysarthria. The forum contains six articles, spanning a variety of research designs (cross-sectional, pre- and post-treatment), kinds of articles (tutorial, research article, commentary), and a range of languages from around the world (English, French, Korean, Portuguese, Spanish).
Citations: 0
Neural Decoding of Spontaneous Overt and Intended Speech.
IF 2.2 | Medicine, CAS Tier 2 | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-08-06 | DOI: 10.1044/2024_JSLHR-24-00046
Debadatta Dash, Paul Ferrari, Jun Wang

Purpose: The aim of this study was to decode intended and overt speech from neuromagnetic signals while the participants performed spontaneous overt speech tasks without cues or prompts (stimuli).

Method: Magnetoencephalography (MEG), a noninvasive neuroimaging technique, was used to collect neural signals from seven healthy adult English speakers performing spontaneous, overt speech tasks. The participants randomly spoke the words yes or no at a self-paced rate without cues. Two machine learning models, namely, linear discriminant analysis (LDA) and one-dimensional convolutional neural network (1D CNN), were employed to classify the two words from the recorded MEG signals.

Results: LDA and 1D CNN achieved average decoding accuracies of 79.02% and 90.40%, respectively, in decoding overt speech, significantly surpassing the chance level (50%). The accuracy for decoding intended speech was 67.19% using 1D CNN.

Conclusions: This study showcases the possibility of decoding spontaneous overt and intended speech directly from neural signals in the absence of perceptual interference. We believe that these findings make a steady step toward the future spontaneous speech-based brain-computer interface.
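The abstract names linear discriminant analysis as one of the two decoders used to classify the two words. As a rough, self-contained illustration only (synthetic feature vectors, not the authors' MEG pipeline or preprocessing), a two-class Fisher LDA can be sketched as:

```python
import numpy as np

def fit_lda(X, y):
    """Two-class Fisher LDA: w = Sigma^-1 (mu1 - mu0), with the decision
    threshold placed at the midpoint of the projected class means."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class covariance, with a small ridge for numerical stability.
    cov = (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)) / 2
    cov += 1e-6 * np.eye(cov.shape[0])
    w = np.linalg.solve(cov, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2
    return w, b

def predict_lda(w, b, X):
    return (X @ w + b > 0).astype(int)

# Synthetic stand-in for per-trial feature vectors:
# 100 trials x 20 features, two word classes ("yes"/"no") with shifted means.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = np.repeat([0, 1], 50)
X[y == 1] += 0.8
w, b = fit_lda(X, y)
acc = (predict_lda(w, b, X) == y).mean()  # training accuracy, well above chance
```

In practice, decoding accuracy would be estimated with cross-validation over held-out trials rather than on the training data as in this toy sketch.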

Citations: 0
Automatic Speech Recognition in Primary Progressive Apraxia of Speech.
IF 2.6 | Medicine, CAS Tier 2 | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-08-06 | DOI: 10.1044/2024_jslhr-24-00049
Katerina A Tetzloff,Daniela Wiepert,Hugo Botha,Joseph R Duffy,Heather M Clark,Jennifer L Whitwell,Keith A Josephs,Rene L Utianski
Introduction: Transcribing disordered speech can be useful when diagnosing motor speech disorders such as primary progressive apraxia of speech (PPAOS), in which speakers produce sound additions, deletions, and substitutions, or distortions and/or slow, segmented speech. Since transcribing speech can be a laborious process and requires an experienced listener, using automatic speech recognition (ASR) systems for diagnosis and treatment monitoring is appealing. This study evaluated the efficacy of a readily available ASR system (wav2vec 2.0) in transcribing the speech of PPAOS patients to determine whether the word error rate (WER) output by the ASR can differentiate between healthy speech and PPAOS and/or among its subtypes, whether WER correlates with AOS severity, and how the ASR's errors compare to those noted in manual transcriptions.

Method: Forty-five patients with PPAOS and 22 healthy controls were recorded repeating 13 words, 3 times each, which were transcribed manually and using wav2vec 2.0. The WER and phonetic and prosodic speech errors were compared between groups, and ASR results were compared against manual transcriptions.

Results: Mean overall WER was 0.88 for patients and 0.33 for controls. WER significantly correlated with AOS severity and accurately distinguished between patients and controls but not between AOS subtypes. The phonetic and prosodic errors from the ASR transcriptions were also unable to distinguish between subtypes, whereas errors calculated from human transcriptions were. There was poor agreement in the number of phonetic and prosodic errors between the ASR and human transcriptions.

Conclusions: This study demonstrates that ASR can be useful in differentiating healthy from disordered speech and evaluating PPAOS severity but does not distinguish PPAOS subtypes. ASR transcriptions showed weak agreement with human transcriptions; thus, ASR may be a useful tool for the transcription of speech in PPAOS, but the research questions posed must be carefully considered within the context of its limitations.

Supplemental Material: https://doi.org/10.23641/asha.26359417
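The word error rate driving these results is the standard word-level edit distance normalized by the number of reference words. A minimal implementation (illustrative only; the study's inputs were wav2vec 2.0 transcripts of patient recordings, not these toy strings) looks like:

```python
def word_error_rate(reference, hypothesis):
    """WER = word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over word tokens.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("please repeat the word", "please repeat the word"))  # 0.0
print(word_error_rate("please repeat the word", "please repeat word"))      # 0.25
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is consistent with severely disordered speech producing very high error rates.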
Citations: 0
Auditory Processing of Speech and Nonspeech in People Who Stutter.
IF 2.2 | Medicine, CAS Tier 2 | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-08-05 | Epub Date: 2024-07-26 | DOI: 10.1044/2024_JSLHR-24-00107
Matthew C Phillips, Emily B Myers

Purpose: We investigated speech and nonspeech auditory processing of temporal and spectral cues in people who do and do not stutter. We also asked whether self-reported stuttering severity was predicted by performance on the auditory processing measures.

Method: People who stutter (n = 23) and people who do not stutter (n = 28) completed a series of four auditory processing tasks online. These tasks consisted of speech and nonspeech stimuli differing in spectral or temporal cues. We then used independent-samples t-tests to assess differences in phonetic categorization slopes between groups and linear mixed-effects models to test differences in nonspeech auditory processing between stuttering and nonstuttering groups, as well as to model stuttering severity as a function of performance on all auditory processing tasks.

Results: We found statistically significant differences between people who do and do not stutter in phonetic categorization of a continuum differing in a temporal cue and in discrimination of nonspeech stimuli differing in a spectral cue. A significant proportion of variance in self-reported stuttering severity was predicted by performance on the auditory processing measures.

Conclusions: Taken together, these results suggest that people who stutter process both speech and nonspeech auditory information differently than people who do not stutter and may point to subtle differences in auditory processing that could contribute to stuttering. We also note that these patterns could be the consequence of listening to one's own speech, rather than the cause of production differences.
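The phonetic categorization slopes compared here come from fitting a sigmoid to identification responses along a cue continuum; a steeper slope indicates a sharper category boundary. A minimal sketch with made-up numbers (the continuum steps and response proportions below are hypothetical, not the study's data):

```python
import numpy as np

def fit_logistic(x, y, lr=0.5, steps=5000):
    """Fit p(category 1 | x) = sigmoid(a*x + b) by gradient descent on the
    cross-entropy loss; the fitted a is the categorization slope."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * x + b)))
        a -= lr * ((p - y) * x).mean()
        b -= lr * (p - y).mean()
    return a, b

# Hypothetical 7-step cue continuum (e.g., a temporal cue), with the
# proportion of "category 1" identifications at each step.
x = np.linspace(-3, 3, 7)
y = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.98])
slope, bias = fit_logistic(x, y)  # steeper slope = sharper category boundary
```

Group comparisons like those in the abstract would then be run on the per-participant fitted slopes (e.g., with an independent-samples t-test).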

Citations: 0
Influence of Sensory Monitoring on Speech Breathing Planning Processes: An Exploratory Study in Aging Speakers Reporting Dyspnea.
IF 2.2 | Medicine, CAS Tier 2 | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-08-05 | Epub Date: 2024-07-09 | DOI: 10.1044/2024_JSLHR-23-00673
Maude Desjardins, Valérie Jomphe, Laurence Lagadec-Gaulin, Matthew Cohen, Katherine Verdolini Abbott

Purpose: Previous studies have suggested that inspirations during speech pauses are influenced by the length of adjacent utterances, owing to respiratory motor planning and physiological recovery processes. The goal of this study was to examine how attention to respiratory sensations may influence these processes in aging speakers with dyspnea, by measuring the effect of sensory monitoring on the relationship between utterance length and the occurrence of inspirations, as well as on functional voice and respiratory measures.

Method: Seventeen adults aged 50 years and older with complaints of voicing-related dyspnea completed a repeated-measures protocol consisting of a 2-week baseline phase and a 4-week sensory monitoring phase. Audiovisual recordings of semistructured speech and self-report questionnaires were collected at study onset, after the baseline phase, and after the sensory monitoring phase. Repeated-measures logistic regressions were conducted to examine changes in the relationship between utterance length and the occurrence of inspirations in adjacent pauses, and repeated-measures analyses of variance were used to investigate any changes in functional voice and respiratory measures.

Results: Planning and recovery processes appeared to remain constant across the baseline phase. From postbaseline to postsensory monitoring timepoints, a strengthening of the relationship between the presence of an inspiration during a speech pause and the length of the subsequent (but not the preceding) utterance was noted. Significant improvements were noted in voice-related handicap from study onset to postsensory monitoring, but no changes were reported in respiratory comfort during speech.

Conclusions: Results suggest that respiratory planning processes, that is, the ability to plan breath intakes based on the length of upcoming utterances, may be modifiable behaviorally through targeted sensory monitoring. Further studies are warranted to validate the proposed role of respiratory sensation awareness in achieving skilled temporal coordination between voicing and breathing.

Citations: 0
Speak Up: How Hearing Loss and the Lack of Hearing Aids Affect Conversations in Quiet.
IF 2.2 | Medicine, CAS Tier 2 | Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY | Pub Date: 2024-08-05 | Epub Date: 2024-07-12 | DOI: 10.1044/2024_JSLHR-23-00667
Eline Borch Petersen, Daniel Parker

Purpose: The study examines the effect of hearing loss and hearing aid (HA) amplification on the conversational dynamics between hearing-impaired (HI) and normal-hearing (NH) interlocutors. Combining data from the current and a prior study, we explore how the speech levels of both interlocutors correlate and relate to HI interlocutors' degree of hearing loss.

Method: Sixteen pairs of younger NH and elderly HI interlocutors conversed in quiet, with the HI interlocutor either unaided or wearing HAs. We analyzed the effect of hearing status and HA amplification on the conversational dynamics, including turn-taking times (floor-transfer offsets), utterance lengths, and speech levels. Furthermore, we conducted an in-depth analysis of the speech levels using combined data sets from the current and previously published data by Petersen, MacDonald, and Sørensen (2022).

Results: Unaided HI interlocutors were slower and more variable at timing their turns, but wearing HAs reduced the differences between the HI and NH interlocutors. Conversations were less interactive, and pairs were slower at solving the conversational tasks when the HI interlocutor was unaided. Both interlocutors spoke louder when the HI interlocutor was unaided. The speech level of the NH interlocutors was related to that of the HI interlocutors, with the HI speech levels also correlating with their own degree of hearing loss.

Conclusions: Despite typically being unchallenging for HI individuals, one-on-one conversations in quiet were impacted by the HI interlocutor not wearing HAs. Additionally, combining data sets revealed that NH interlocutors adjusted their speech level to match that of HI interlocutors.
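Floor-transfer offsets, one of the turn-taking measures named in the Method, are the signed gaps between one speaker's turn end and the next speaker's turn start, with negative values indicating overlap. A minimal sketch (assuming turns are available as (speaker, start, end) triples in seconds; the data format and example values are hypothetical):

```python
def floor_transfer_offsets(turns):
    """turns: list of (speaker, start_s, end_s) sorted by start time.
    For each exchange between different speakers, the FTO is the next
    speaker's start minus the current speaker's end; negative = overlap."""
    ftos = []
    for (spk_a, _, end_a), (spk_b, start_b, _) in zip(turns, turns[1:]):
        if spk_a != spk_b:
            ftos.append(start_b - end_a)
    return ftos

# Toy conversation: B answers 0.2 s after A ends; A then overlaps B by 0.1 s.
turns = [("A", 0.0, 1.2), ("B", 1.4, 2.0), ("A", 1.9, 3.0)]
offsets = floor_transfer_offsets(turns)
```

Slower and more variable turn-taking, as reported for unaided HI interlocutors, would show up as a higher mean and standard deviation of these offsets.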

Speak Up: How Hearing Loss and the Lack of Hearing Aids Affect Conversations in Quiet.
IF 2.2 Medicine Tier 2 Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-08-05 DOI: 10.1044/2024_JSLHR-23-00667
Eline Borch Petersen, Daniel Parker
Citations: 0
Introducing the Intra-Individual Variability Hypothesis in Explaining Individual Differences in Language Development.
IF 2.2 Medicine Tier 2 Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-08-05 Epub Date: 2024-06-24 DOI: 10.1044/2024_JSLHR-23-00527
Anna Kautto, Henry Railo, Elina Mainela-Arnold

Purpose: Response times (RTs) are commonly used in studying language acquisition. However, previous research utilizing RT in the context of language has largely overlooked the intra-individual variability (IIV) of RTs, which could hold significant information about the processes underlying language acquisition.

Method: We explored the association between language abilities and RT variability in visuomotor tasks using two data sets from previously published studies. The participants were 7- to 10-year-old children (n = 77).

Results: Our results suggest that increased variability in RTs is associated with weaker language abilities. Specifically, this within-participant variability in visuomotor RTs, especially the proportion of unusually slow responses, predicted language abilities better than mean RTs, a factor often linked to language skills in past research.

Conclusions: Based on our findings, we introduce the IIV hypothesis in explaining individual differences in language development. According to this hypothesis, inconsistency in the timing of cognitive processes, reflected by increased IIV in RTs, degrades the learning of different aspects of language and results in individual differences in language abilities. Future studies should further examine the relationship between IIV and language abilities and test the extent to which any such relationship is causal.
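The abstract's key measures — within-participant RT variability and the proportion of unusually slow responses — can be sketched for a single child's RT series. This is an illustration only: the data are invented, and the cutoff for "unusually slow" (mean + 2 SD) is an assumed operationalization, not necessarily the paper's exact criterion:

```python
from statistics import mean, stdev

def iiv_metrics(rts, slow_factor=2.0):
    """Summarize intra-individual variability (IIV) of one participant's
    response times (ms): mean RT, within-participant SD, and the
    proportion of unusually slow responses, here operationalized
    (an assumption) as RTs exceeding mean + slow_factor * SD."""
    m, sd = mean(rts), stdev(rts)
    cutoff = m + slow_factor * sd
    prop_slow = sum(rt > cutoff for rt in rts) / len(rts)
    return {"mean_rt": m, "sd_rt": sd, "prop_slow": prop_slow}

# Two hypothetical children with the same mean RT but different IIV
steady  = [410, 395, 405, 400, 390, 400]
erratic = [300, 320, 310, 640, 300, 530]
print(iiv_metrics(steady)["sd_rt"] < iiv_metrics(erratic)["sd_rt"])  # → True
```

The comparison mirrors the abstract's point: two children can be indistinguishable on mean RT while differing sharply in IIV, which is why variability-based measures can carry predictive information that means miss.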

Citations: 0
Morphological and Inhibitory Skills in Monolingual and Bilingual Children With and Without Developmental Language Disorder.
IF 2.2 Medicine Tier 2 Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-08-05 Epub Date: 2024-07-26 DOI: 10.1044/2024_JSLHR-23-00368
Elena Gandolfi, Giovanna Diotallevi, Paola Viterbori

Purpose: This study examined the language and nonverbal inhibitory control skills of typically developing (TD) Italian monolingual preschoolers, TD bilingual preschoolers with Italian as their second language, and age-matched monolingual and bilingual peers with developmental language disorder (DLD).

Method: Four groups of preschoolers were enrolled: 30 TD Italian monolinguals, 24 TD bilinguals, 19 Italian monolinguals with DLD, and 19 bilinguals with DLD. All children were assessed in Italian on vocabulary, receptive morphosyntax, and morphological markers for DLD in the Italian language (i.e., third-person verb inflections, definite articles, third-person direct-object clitic pronouns, simple prepositions) and nonverbal inhibitory control skills. Group performance was compared using a series of one-way analyses of variance.

Results: Monolingual and bilingual children with DLD performed significantly worse on all language measures than both TD monolingual and TD bilingual children. TD bilinguals, although showing better language skills overall than monolinguals with DLD, achieved a performance closer to that of monolinguals with DLD yet significantly higher than that of bilinguals with DLD. Both TD groups outperformed both DLD groups on inhibitory control tasks, particularly the interference suppression task.

Conclusions: This study provides a picture of language and inhibitory control characteristics of children with various language profiles and adds to the literature on potential markers of DLD among bilingual children. These results suggest that the assessment of nonlinguistic markers, which are associated with language impairment, could be a useful approach to better specify the diagnosis of DLD and reduce cases of misdiagnosis in the context of bilingualism.

Citations: 0
Erratum to "Development and Validation of the Bilingual Catalan/Spanish Cross-Cultural Adaptation of the Consensus Auditory-Perceptual Evaluation of Voice".
IF 2.2 Medicine Tier 2 Q1 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY Pub Date: 2024-08-05 Epub Date: 2024-07-05 DOI: 10.1044/2024_JSLHR-24-00368
Citations: 0
Journal of Speech Language and Hearing Research