Pub Date: 2024-10-24. Epub Date: 2024-07-03. DOI: 10.1044/2024_JSLHR-23-00125
Alan A Wrench
Purpose: Tongue anatomy and function are widely described as consisting of four extrinsic muscles that control position and four intrinsic muscles that control shape. This myoarchitecture cannot, however, explain independent tongue body and blade movement, nor can it accurately model the subtlety of observed lingual shapes. This study presents the case for a finer neuromuscular structure and functional description.
Method: Using the theoretical framework of the partitioning hypothesis, evidence for neuromuscular compartments of each of the lingual muscles was discerned by reviewing studies of lingual anatomy, hypoglossal nerve staining, hypoglossal motoneuron axon tracing, muscle fiber type distribution, and electromyography. Muscle fibers of the visible human female were manually traced to produce a three-dimensional atlas of muscular compartments. A kinematic study was undertaken to determine the degree of independent movement between different parts of the tongue. A simple biomechanical model was used to demonstrate how synergistic groups of compartments can control sectors of the tongue.
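The abstract does not specify how kinematic independence between tongue regions was quantified; purely as an illustration, one simple index is the pairwise correlation between displacement traces recorded at different tongue points (e.g., articulography sensors). The sensor names and trajectories below are hypothetical.

```python
import numpy as np

def independence_index(traces: dict[str, np.ndarray]) -> dict[tuple[str, str], float]:
    """Return 1 - |r| for each pair of displacement traces.

    Values near 1 suggest largely independent movement; values near 0
    suggest the two points move together.
    """
    names = list(traces)
    out = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = np.corrcoef(traces[a], traces[b])[0, 1]
            out[(a, b)] = 1.0 - abs(r)
    return out

# Hypothetical vertical-displacement traces (arbitrary units) for three tongue points.
t = np.linspace(0, 1, 500)
traces = {
    "blade":  np.sin(2 * np.pi * 4 * t),
    "body":   np.sin(2 * np.pi * 4 * t + 0.3),  # moves much like the blade
    "dorsum": np.sin(2 * np.pi * 2 * t),         # follows a different gesture
}
print(independence_index(traces))
```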
Results: Results indicated as many as 10 compartments of genioglossus, two each of superior and inferior longitudinal, eight of styloglossus, three of hyoglossus, and six each of transversus and verticalis, while palatoglossus may not have a significant role in tongue function. Kinematic analysis indicated independent control of five sectors of the tongue body, and biomechanical modeling demonstrated how this control may be achieved.
Conclusion: Evidence is presented for a lingual structure based on neuromuscular compartments, which work together to position and shape sectors of the tongue and independently control tongue body and blade.
{"title":"The Compartmental Tongue.","authors":"Alan A Wrench","doi":"10.1044/2024_JSLHR-23-00125","DOIUrl":"10.1044/2024_JSLHR-23-00125","url":null,"abstract":"<p><strong>Purpose: </strong>Tongue anatomy and function is widely described as consisting of four extrinsic muscles to control position and four intrinsic muscles to control shape. This myoarchitecture cannot, however, explain independent tongue body and blade movement nor accurately model the subtlety of observed lingual shapes. This study presents the case for a finer neuromuscular structure and functional description.</p><p><strong>Method: </strong>Using the theoretical framework of the partitioning hypothesis, evidence for neuromuscular compartments of each of the lingual muscles was discerned by reviewing studies of lingual anatomy, hypoglossal nerve staining, hypoglossal motoneuron axon tracing, muscle fiber type distribution, and electromyography. Muscle fibers of the visible human female were manually traced to produce a three-dimensional atlas of muscular compartments. A kinematic study was undertaken to determine the degree of independent movement between different parts of the tongue. A simple biomechanical model was used to demonstrate how synergistic groups of compartments can control sectors of the tongue.</p><p><strong>Results: </strong>Results indicated as many as 10 compartments of genioglossus, two each of superior and inferior longitudinal, eight of styloglossus, three of hyoglossus, and six each of transversus and verticalis, while palatoglossus may not have a significant role in tongue function. Kinematic analysis indicated independent control of five sectors of the tongue body, and biomechanical modeling demonstrated how this control may be achieved.</p><p><strong>Conclusion: </strong>Evidence is presented for a lingual structure based on neuromuscular compartments, which work together to position and shape sectors of the tongue and independently control tongue body and blade.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"3887-3913"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141499627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24. Epub Date: 2023-07-26. DOI: 10.1044/2023_JSLHR-23-00076
Esther Janse, Chen Shen, Esther de Kerf
Purpose: In a previous publication, we observed that maximum speech performance in a nonclinical sample of young adult speakers producing alternating diadochokinesis (DDK) sequences (e.g., rapidly repeating "pataka") was associated with cognitive control: Those with better cognitive switching abilities (i.e., switching flexibly between tasks or mental sets) showed higher DDK accuracy. To follow up on these results, we investigated whether this previously observed association is specific to the rapid production of alternating sequences or also holds for non-alternating sequences (e.g., "tatata").
Method: For the same sample of 78 young adults as in our previous study, we additionally analyzed their accuracy and rate performance on non-alternating sequences to investigate whether executive control abilities (i.e., indices of speakers' updating, inhibition, and switching abilities) were more strongly associated with production of alternating, as compared with non-alternating, sequences.
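The abstract does not state the exact statistical model; a mixed-effects regression with a switching × sequence-type interaction is one standard way to test whether an executive-control index predicts DDK performance differently for alternating versus non-alternating sequences. A minimal sketch with statsmodels, on simulated data and invented column names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 78  # same sample size as the study; the data themselves are simulated

# Hypothetical long-format data: one row per speaker x sequence type.
switching = rng.normal(size=n)  # z-scored switching ability
df = pd.DataFrame({
    "speaker": np.repeat([f"s{i:02d}" for i in range(n)], 2),
    "seq_type": ["alternating", "nonalternating"] * n,
    "switching": np.repeat(switching, 2),
})
# Simulated rates: better switchers slightly slower, as reported in the abstract.
df["ddk_rate"] = 6.5 - 0.2 * df["switching"] + rng.normal(scale=0.3, size=2 * n)

# Random-intercept model: does switching predict rate, and does the effect
# differ between alternating and non-alternating sequences?
model = smf.mixedlm("ddk_rate ~ switching * seq_type", df, groups=df["speaker"])
print(model.fit().summary())
```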
Results: Of the three executive control abilities, only switching predicted both DDK accuracy and rate. The association between cognitive switching (and updating ability) and DDK accuracy was only observed for alternating sequences. The DDK rate model included a simple effect of cognitive switching, such that those with better switching ability showed slower diadochokinetic rates across the board. Thus, those with better cognitive ability showed more accurate (alternating) diadochokinetic production and slower maximum rates for both alternating and non-alternating sequences.
Conclusion: These combined results suggest that those with better executive control have better control over their maximum speech performance and show that the link between cognitive control and maximum speech performance also holds for non-alternating sequences.
{"title":"Diadochokinesis Performance and Its Link to Cognitive Control: Alternating Versus Non-Alternating Diadochokinesis.","authors":"Esther Janse, Chen Shen, Esther de Kerf","doi":"10.1044/2023_JSLHR-23-00076","DOIUrl":"10.1044/2023_JSLHR-23-00076","url":null,"abstract":"<p><strong>Purpose: </strong>In a previous publication, we observed that maximum speech performance in a nonclinical sample of young adult speakers producing <i>alternating</i> diadochokinesis (DDK) sequences (e.g., rapidly repeating \"pataka\") was associated with cognitive control: Those with better cognitive switching abilities (i.e., switching flexibly between tasks or mental sets) showed higher DDK accuracy. To follow up on these results, we investigated whether this previously observed association is specific to the rapid production of <i>alternating</i> sequences or also holds for <i>non-alternating</i> sequences (e.g., \"tatata\").</p><p><strong>Method: </strong>For the same sample of 78 young adults as in our previous study, we additionally analyzed their accuracy and rate performance on non-alternating sequences to investigate whether executive control abilities (i.e., indices of speakers' updating, inhibition, and switching abilities) were more strongly associated with production of alternating, as compared with non-alternating, sequences.</p><p><strong>Results: </strong>Of the three executive control abilities, only switching predicted both DDK accuracy and rate. The association between cognitive switching (and updating ability) and DDK <i>accuracy</i> was only observed for alternating sequences. The DDK <i>rate</i> model included a simple effect of cognitive switching, such that those with better switching ability showed slower diadochokinetic rates across the board. Thus, those with better cognitive ability showed more accurate (alternating) diadochokinetic production and slower maximum rates for both alternating and non-alternating sequences.</p><p><strong>Conclusion: </strong>These combined results suggest that those with better executive control have better control over their maximum speech performance and show that the link between cognitive control and maximum speech performance also holds for non-alternating sequences.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4096-4106"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9930587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24. Epub Date: 2023-09-06. DOI: 10.1044/2023_JSLHR-23-00070
Aravind K Namasivayam, Hyunji Shin, Rosane Nisenbaum, Margit Pukonen, Pascal van Lieshout
Purpose: The purpose of the study was to investigate child- and intervention-level factors that predict improvements in functional communication outcomes in children with motor-based speech sound disorders.
Method: Eighty-five preschool-age children with childhood apraxia of speech (n = 37) and speech motor delay (n = 48) participated. Multivariable logistic regression models estimated odds ratios and 95% confidence intervals for the association between minimal clinically important difference in the Focus on the Outcomes of Communication Under Six scores and multiple child-level (e.g., age, sex, speech intelligibility, Kaufman Speech Praxis Test diagnostic rating scale) and intervention-level predictors (dose frequency and home practice duration).
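As a sketch of the kind of analysis described (multivariable logistic regression yielding odds ratios and 95% confidence intervals for attaining the minimal clinically important difference), the snippet below uses statsmodels with simulated data; the variable names are placeholders, not the study's actual coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 85  # matches the sample size; values are simulated placeholders

# Hypothetical data frame: one row per child; 'mcid' is 1 if the child met
# the minimal clinically important difference on the outcome measure, else 0.
df = pd.DataFrame({
    "mcid": rng.integers(0, 2, size=n),
    "age_months": rng.normal(48, 8, size=n),
    "kspt_rating": rng.integers(1, 5, size=n),      # diagnostic rating scale
    "dose_per_week": rng.choice([1, 2], size=n),
    "home_practice_min": rng.normal(60, 20, size=n),
})

fit = smf.logit(
    "mcid ~ age_months + kspt_rating + dose_per_week + home_practice_min", df
).fit(disp=False)

# Exponentiate coefficients to get odds ratios and their 95% CIs.
odds_ratios = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
})
print(odds_ratios.round(2))
```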
Results: Overall, 65% of participants demonstrated minimal clinically important difference changes in the functional communication outcomes. The Kaufman Speech Praxis Test rating scale was significantly associated with higher odds of noticeable change in functional communication outcomes in children. There is some evidence that delivering the intervention two times per week for 10 weeks provides benefit.
Conclusion: A rating scale based on task complexity has the potential to serve as a screening tool for triaging children on a waitlist for intervention and/or for determining service delivery for this population.
{"title":"Predictors of Functional Communication Outcomes in Children With Idiopathic Motor Speech Disorders.","authors":"Aravind K Namasivayam, Hyunji Shin, Rosane Nisenbaum, Margit Pukonen, Pascal van Lieshout","doi":"10.1044/2023_JSLHR-23-00070","DOIUrl":"10.1044/2023_JSLHR-23-00070","url":null,"abstract":"<p><strong>Purpose: </strong>The purpose of the study was to investigate child- and intervention-level factors that predict improvements in functional communication outcomes in children with motor-based speech sound disorders.</p><p><strong>Method: </strong>Eighty-five preschool-age children with childhood apraxia of speech (<i>n</i> = 37) and speech motor delay (<i>n</i> = 48) participated. Multivariable logistic regression models estimated odds ratios and 95% confidence intervals for the association between minimal clinically important difference in the Focus on the Outcomes of Communication Under Six scores and multiple child-level (e.g., age, sex, speech intelligibility, Kaufman Speech Praxis Test diagnostic rating scale) and intervention-level predictors (dose frequency and home practice duration).</p><p><strong>Results: </strong>Overall, 65% of participants demonstrated minimal clinically important difference changes in the functional communication outcomes. Kaufman Speech Praxis Test rating scale was significantly associated with higher odds of noticeable change in functional communication outcomes in children. There is some evidence that delivering the intervention for 2 times per week for 10 weeks provides benefit.</p><p><strong>Conclusion: </strong>A rating scale based on task complexity has the potential for serving as a screening tool to triage children for intervention from waitlist and/or determining service delivery for this population.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4053-4068"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10541282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24. DOI: 10.1044/2024_JSLHR-24-00354
Kaila L Stipancic, Frits van Brenk, Mengyang Qiu, Kris Tjaden
Purpose: The purpose of the current study was to estimate the minimal clinically important difference (MCID) of sentence intelligibility in control speakers and in speakers with dysarthria due to multiple sclerosis (MS) and Parkinson's disease (PD).
Method: Sixteen control speakers, 16 speakers with MS, and 16 speakers with PD were audio-recorded reading aloud sentences in habitual, clear, fast, loud, and slow speaking conditions. Two hundred forty nonexpert crowdsourced listeners heard paired conditions of the same sentence content from a speaker and indicated if one condition was more understandable than another. Listeners then used the Global Ratings of Change (GROC) Scale to indicate how much more understandable that condition was than the other. Listener ratings were compared with objective intelligibility scores obtained previously via orthographic transcriptions from nonexpert listeners. Receiver operating characteristic (ROC) curves and average magnitude of intelligibility difference per level of the GROC Scale were evaluated to determine the sensitivity, specificity, and accuracy of potential cutoff scores in intelligibility for establishing thresholds of important change.
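For readers unfamiliar with deriving cutoffs from ROC curves, the sketch below applies Youden's J to invented paired-comparison data; the study itself does not specify which cutoff criterion was used, so this is only an illustration of the general procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)

# Hypothetical data: for each paired comparison, the absolute intelligibility
# difference (%) and whether listeners judged the pair as noticeably different.
intel_diff = np.abs(rng.normal(10, 6, size=500))
noticed = (intel_diff + rng.normal(0, 4, size=500)) > 8

fpr, tpr, thresholds = roc_curve(noticed, intel_diff)
youden_j = tpr - fpr
best = thresholds[np.argmax(youden_j)]

print(f"AUC = {roc_auc_score(noticed, intel_diff):.2f}")
print(f"Candidate cutoff (Youden's J): {best:.1f}% intelligibility difference")
```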
Results: MCIDs derived from the ROC curves were invalid. However, valid and useful thresholds were derived from the average magnitude of intelligibility difference. The MCID of intelligibility was determined to be about 7% for a small amount of difference and about 15% for a large amount of difference.
Conclusions: This work demonstrates the feasibility of the novel experimental paradigm for collecting crowdsourced perceptual data to estimate MCIDs. Results provide empirical evidence that clinical tools for the perception of intelligibility by nonexpert listeners could consist of three categories, which emerged from the data ("no difference," "a little bit of difference," "a lot of difference"). The current work is a critical step toward development of a universal language with which to evaluate changes in intelligibility as a result of neurological injury, disease progression, and speech-language therapy.
{"title":"Progress Toward Estimating the Minimal Clinically Important Difference of Intelligibility: A Crowdsourced Perceptual Experiment.","authors":"Kaila L Stipancic, Frits van Brenk, Mengyang Qiu, Kris Tjaden","doi":"10.1044/2024_JSLHR-24-00354","DOIUrl":"10.1044/2024_JSLHR-24-00354","url":null,"abstract":"<p><strong>Purpose: </strong>The purpose of the current study was to estimate the minimal clinically important difference (MCID) of sentence intelligibility in control speakers and in speakers with dysarthria due to multiple sclerosis (MS) and Parkinson's disease (PD).</p><p><strong>Method: </strong>Sixteen control speakers, 16 speakers with MS, and 16 speakers with PD were audio-recorded reading aloud sentences in habitual, clear, fast, loud, and slow speaking conditions. Two hundred forty nonexpert crowdsourced listeners heard paired conditions of the same sentence content from a speaker and indicated if one condition was more understandable than another. Listeners then used the Global Ratings of Change (GROC) Scale to indicate <i>how much more understandable</i> that condition was than the other. Listener ratings were compared with objective intelligibility scores obtained previously via orthographic transcriptions from nonexpert listeners. Receiver operating characteristic (ROC) curves and average magnitude of intelligibility difference per level of the GROC Scale were evaluated to determine the sensitivity, specificity, and accuracy of potential cutoff scores in intelligibility for establishing thresholds of important change.</p><p><strong>Results: </strong>MCIDs derived from the ROC curves were invalid. However, the average magnitude of intelligibility difference derived valid and useful thresholds. The MCID of intelligibility was determined to be about 7% for a small amount of difference and about 15% for a large amount of difference.</p><p><strong>Conclusions: </strong>This work demonstrates the feasibility of the novel experimental paradigm for collecting crowdsourced perceptual data to estimate MCIDs. Results provide empirical evidence that clinical tools for the perception of intelligibility by nonexpert listeners could consist of three categories, which emerged from the data (\"no difference,\" \"a little bit of difference,\" \"a lot of difference\"). The current work is a critical step toward development of a universal language with which to evaluate changes in intelligibility as a result of neurological injury, disease progression, and speech-language therapy.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"1-15"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142512654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24. Epub Date: 2023-10-23. DOI: 10.1044/2023_JSLHR-23-00056
Claudia I Abbiati, Kimberly R Bauerly, Shelley L Velleman
Purpose: The spatiotemporal index (STI) is a common measure of articulatory variability used to examine speech-motor control. However, the methods used to elicit productions for measuring the STI have varied across studies. The aim of this study was to determine whether STI values are affected by changes in elicitation methods.
Method: Lip aperture STI (LA STI) was calculated for 19 monolingual English-speaking young adults based upon the production of four declarative sentences that varied by length and complexity. Using a 2 × 2 design, productions were elicited under the following two conditions: repetition type (consecutive vs. pseudorandom) and stimulus presentation type (auditory vs. combined auditory and visual). Conditions for eliciting productions were counterbalanced among participants.
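The LA STI is conventionally computed by amplitude-normalizing (z-scoring) and linearly time-normalizing each repetition's lip-aperture trajectory, then summing the across-repetition standard deviations at a fixed number of normalized time points. A minimal sketch of that computation on simulated trajectories follows; the normalization details of this particular study may differ.

```python
import numpy as np

def spatiotemporal_index(trials: list[np.ndarray], n_points: int = 50) -> float:
    """Compute the STI from several repetitions of a movement trajectory.

    Each trial is amplitude-normalized (z-scored) and linearly time-normalized
    to n_points samples; the STI is the sum of the standard deviations across
    trials at each normalized time point.
    """
    normed = []
    for trial in trials:
        z = (trial - trial.mean()) / trial.std()
        old_t = np.linspace(0, 1, len(z))     # original time axis (0-1)
        new_t = np.linspace(0, 1, n_points)   # common normalized time axis
        normed.append(np.interp(new_t, old_t, z))
    stacked = np.vstack(normed)               # shape: (n_trials, n_points)
    return float(stacked.std(axis=0).sum())

# Hypothetical lip-aperture records for ten repetitions of the same sentence,
# differing slightly in duration and shape.
rng = np.random.default_rng(3)
trials = []
for _ in range(10):
    n = rng.integers(180, 220)
    trials.append(np.sin(np.linspace(0, 4 * np.pi, n)) + rng.normal(0, 0.1, n))

print(f"LA STI = {spatiotemporal_index(trials):.2f}")
```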
Results: There was a main effect of repetition type (p < .001) and sentence type (p < .030). Pseudorandom repetitions resulted in higher mean LA STI values across sentence types compared to those computed from consecutive repetitions. There were no significant differences for stimulus presentation type. That is, no differences in mean LA STI were found between the auditory versus combined auditory and visual presentations.
Conclusions: Our findings show that the methods used to elicit sentence productions have a significant effect on LA STI values. Findings suggest that there is a need for researchers to consider these effects when designing methods for measuring LA STI.
{"title":"Speech Elicitation Methods for Measuring Articulatory Control.","authors":"Claudia I Abbiati, Kimberly R Bauerly, Shelley L Velleman","doi":"10.1044/2023_JSLHR-23-00056","DOIUrl":"10.1044/2023_JSLHR-23-00056","url":null,"abstract":"<p><strong>Purpose: </strong>Spatiotemporal index (STI) is a common measure of articulatory variability used to examine speech-motor control. However, the methods used to elicit productions for measuring STI have varied across studies. The aim of this study was to determine whether STI values are affected by changes in elicitation methods.</p><p><strong>Method: </strong>Lip aperture STI (LA STI) was calculated for 19 monolingual English-speaking young adults based upon the production of four declarative sentences that varied by length and complexity. Using a 2 × 2 design, productions were elicited under the following two conditions: repetition type (consecutive vs. pseudorandom) and stimulus presentation type (auditory vs. combined auditory and visual). Conditions for eliciting productions were counterbalanced among participants.</p><p><strong>Results: </strong>There was a main effect of repetition type (<i>p</i> < .001) and sentence type (<i>p</i> < .030). Pseudorandom repetitions resulted in higher mean LA STI values across sentence types compared to those computed from consecutive repetitions. There were no significant differences for stimulus presentation type. That is, no differences in mean LA STI were found between the auditory versus combined auditory and visual presentations.</p><p><strong>Conclusions: </strong>Our findings show that the methods used to elicit sentence productions have a significant effect on LA STI values. Findings suggest that there is a need for researchers to consider these effects when designing methods for measuring LA STI.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4107-4114"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11546903/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49693754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24. Epub Date: 2024-07-12. DOI: 10.1044/2024_JSLHR-23-00286
Justin D Dvorak, Frank R Boutsen
Purpose: Collaboration in the field of speech-language pathology occurs across a variety of digital devices and can entail the usage of multiple software tools, systems, file formats, and even programming languages. Unfortunately, gaps between the laboratory, clinic, and classroom can emerge in part because of siloing of data and workflows, as well as the digital divide between users. The purpose of this tutorial is to present the Collaboverse, a web-based collaborative system that unifies these domains, and describe the application of this tool to common tasks in speech-language pathology. In addition, we demonstrate its utility in machine learning (ML) applications.
Method: This tutorial outlines key concepts in the digital divide, data management, distributed computing, and ML. It introduces the Collaboverse workspace for researchers, clinicians, and educators in speech-language pathology who wish to improve their collaborative network and leverage advanced computation abilities. It also details an ML approach to prosodic analysis.
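The tutorial's own ML pipeline is not reproduced here; as a generic illustration of prosodic feature extraction feeding a classifier, the sketch below computes F0 and energy statistics with librosa and fits a scikit-learn model on synthetic signals with invented labels.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def prosodic_features(y: np.ndarray, sr: int) -> np.ndarray:
    """Tiny illustrative feature set: F0 statistics plus RMS energy statistics."""
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)
    rms = librosa.feature.rms(y=y)[0]
    return np.array([f0.mean(), f0.std(), rms.mean(), rms.std()])

# Synthetic stand-ins for utterances (a flat vs. a rising pitch contour).
sr = 16000
t = np.arange(0, 1.0, 1 / sr)
flat = np.sin(2 * np.pi * 120 * t).astype(np.float32)
rising = np.sin(2 * np.pi * (120 + 60 * t) * t).astype(np.float32)

X = np.vstack([prosodic_features(flat, sr), prosodic_features(rising, sr)])
labels = [0, 1]  # e.g., monotone vs. varied prosody (illustrative labels only)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X))
```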
Conclusions: The Collaboverse shows promise in narrowing the digital divide and is capable of generating clinically relevant data, specifically in the area of prosody, whose computational complexity has limited widespread analysis in research and clinic alike. In addition, it includes an augmentative and alternative communication app allowing visual, nontextual communication.
{"title":"The Collaboverse: A Collaborative Data-Sharing and Speech Analysis Platform.","authors":"Justin D Dvorak, Frank R Boutsen","doi":"10.1044/2024_JSLHR-23-00286","DOIUrl":"10.1044/2024_JSLHR-23-00286","url":null,"abstract":"<p><strong>Purpose: </strong>Collaboration in the field of speech-language pathology occurs across a variety of digital devices and can entail the usage of multiple software tools, systems, file formats, and even programming languages. Unfortunately, gaps between the laboratory, clinic, and classroom can emerge in part because of siloing of data and workflows, as well as the digital divide between users. The purpose of this tutorial is to present the Collaboverse, a web-based collaborative system that unifies these domains, and describe the application of this tool to common tasks in speech-language pathology. In addition, we demonstrate its utility in machine learning (ML) applications.</p><p><strong>Method: </strong>This tutorial outlines key concepts in the digital divide, data management, distributed computing, and ML. It introduces the Collaboverse workspace for researchers, clinicians, and educators in speech-language pathology who wish to improve their collaborative network and leverage advanced computation abilities. It also details an ML approach to prosodic analysis.</p><p><strong>Conclusions: </strong>The Collaboverse shows promise in narrowing the digital divide and is capable of generating clinically relevant data, specifically in the area of prosody, whose computational complexity has limited widespread analysis in research and clinic alike. In addition, it includes an augmentative and alternative communication app allowing visual, nontextual communication.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4137-4156"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141602150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24. Epub Date: 2023-12-05. DOI: 10.1044/2023_JSLHR-23-00081
Sarah Horton, Victoria Jackson, Jessica Boyce, Marie-Christine Franken, Stephanie Siemers, Miya St John, Stephen Hearps, Olivia van Reyk, Ruth Braden, Richard Parker, Adam P Vogel, Else Eising, David J Amor, Janelle Irvine, Simon E Fisher, Nicholas G Martin, Sheena Reilly, Melanie Bahlo, Ingrid Scheffer, Angela Morgan
Purpose: To our knowledge, there are no data examining the agreement between self-reported and clinician-rated stuttering severity. In the era of big data, self-reported ratings have great potential utility for large-scale data collection, where cost and time preclude in-depth assessment by a clinician. Equally, there is increasing emphasis on the need to recognize an individual's experience of their own condition. Here, we examined the agreement between self-reported stuttering severity and clinician ratings during a speech assessment. As a secondary objective, we determined whether self-reported stuttering severity correlated with an individual's subjective impact of stuttering.
Method: Speech-language pathologists conducted face-to-face speech assessments with 195 participants (137 males) aged 5-84 years, recruited from a cohort of people with self-reported stuttering. Stuttering severity was rated on a 10-point scale by the participant and by two speech-language pathologists. Participants also completed the Overall Assessment of the Subjective Experience of Stuttering (OASES). Clinician and participant ratings were compared. The association between stuttering severity and the OASES scores was examined.
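As an illustration of the agreement analysis described, the snippet below computes a Spearman correlation (suited to ordinal 10-point ratings) on invented self- and clinician ratings; the study's actual correlation method is not specified in this abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical 10-point severity ratings for 195 participants.
self_rating = rng.integers(1, 11, size=195)
clinician_rating = np.clip(self_rating + rng.integers(-2, 3, size=195), 1, 10)

rho, p = stats.spearmanr(self_rating, clinician_rating)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```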
Results: There was a strong positive correlation between speech-language pathologist and participant-reported ratings of stuttering severity. Participant-reported stuttering severity correlated weakly with the four OASES domains and with the OASES overall impact score.
Conclusions: Participants were able to accurately rate their stuttering severity during a speech assessment using a simple one-item question. This finding indicates that self-reported stuttering severity is a suitable method for large-scale data collection. Findings also support the collection of self-report subjective experience data using questionnaires, such as the OASES, which add vital information about the participants' experience of stuttering that is not captured by overt speech severity ratings alone.
{"title":"Self-Reported Stuttering Severity Is Accurate: Informing Methods for Large-Scale Data Collection in Stuttering.","authors":"Sarah Horton, Victoria Jackson, Jessica Boyce, Marie-Christine Franken, Stephanie Siemers, Miya St John, Stephen Hearps, Olivia van Reyk, Ruth Braden, Richard Parker, Adam P Vogel, Else Eising, David J Amor, Janelle Irvine, Simon E Fisher, Nicholas G Martin, Sheena Reilly, Melanie Bahlo, Ingrid Scheffer, Angela Morgan","doi":"10.1044/2023_JSLHR-23-00081","DOIUrl":"10.1044/2023_JSLHR-23-00081","url":null,"abstract":"<p><strong>Purpose: </strong>To our knowledge, there are no data examining the agreement between self-reported and clinician-rated stuttering severity. In the era of big data, self-reported ratings have great potential utility for large-scale data collection, where cost and time preclude in-depth assessment by a clinician. Equally, there is increasing emphasis on the need to recognize an individual's experience of their own condition. Here, we examined the agreement between self-reported stuttering severity compared to clinician ratings during a speech assessment. As a secondary objective, we determined whether self-reported stuttering severity correlated with an individual's subjective impact of stuttering.</p><p><strong>Method: </strong>Speech-language pathologists conducted face-to-face speech assessments with 195 participants (137 males) aged 5-84 years, recruited from a cohort of people with self-reported stuttering. Stuttering severity was rated on a 10-point scale by the participant and by two speech-language pathologists. Participants also completed the Overall Assessment of the Subjective Experience of Stuttering (OASES). Clinician and participant ratings were compared. The association between stuttering severity and the OASES scores was examined.</p><p><strong>Results: </strong>There was a strong positive correlation between speech-language pathologist and participant-reported ratings of stuttering severity. Participant-reported stuttering severity correlated weakly with the four OASES domains and with the OASES overall impact score.</p><p><strong>Conclusions: </strong>Participants were able to accurately rate their stuttering severity during a speech assessment using a simple one-item question. This finding indicates that self-report stuttering severity is a suitable method for large-scale data collection. Findings also support the collection of self-report subjective experience data using questionnaires, such as the OASES, which add vital information about the participants' experience of stuttering that is not captured by overt speech severity ratings alone.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4015-4024"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138489077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24. Epub Date: 2024-09-20. DOI: 10.1044/2024_JSLHR-23-00591
Ben Maassen, Hayo Terband
Background: Children with speech sound disorders (SSD) form a heterogeneous group, with respect to severity, etiology, proximal causes, speech error characteristics, and response to treatment. Infants develop speech and language in interaction with neurological maturation and general perceptual, motoric, and cognitive skills in a social-emotional context.
Purpose: After a brief introduction into psycholinguistic models of speech production and levels of causation, in this review article, we present an in-depth overview of mechanisms and processes, and the dynamics thereof, which are crucial in typical speech development. These basic mechanisms and processes are: (a) neurophysiological motor refinement, that is, the maturational articulatory mechanisms that drive babbling and the more differentiated production of larger speech patterns; (b) sensorimotor integration, which forms the steering function from phonetics to phonology; and (c) motor hierarchy and articulatory phonology describing the gestural organization of syllables, which underlie fluent speech production. These dynamics have consequences for the diagnosis and further analysis of SSD in children. We argue that current diagnostic classification systems do not do justice to the multilevel, multifactorial, and interactive character of the underlying mechanisms and processes. This is illustrated by a recent Dutch study yielding distinct performance profiles among children with SSD, which allows for a dimensional interpretation of underlying processing deficits.
Conclusions: Analyses of mainstream treatments with respect to the treatment goals and the speech mechanisms addressed show that treatment programs are quite transparent in their aims and approach and how they contribute to remediating specific deficits or mechanisms. Recent studies into clinical reasoning reveal that the clinical challenge for speech-language pathologists is how to select the most appropriate treatment at the most appropriate time for each individual child with SSD. We argue that a process-oriented approach has merits as compared to categorical diagnostics as a toolbox to aid in the interpretation of the speech profile in terms of underlying deficits and to connect these to a specific intervention approach and treatment target.
{"title":"Toward Process-Oriented, Dimensional Approaches for Diagnosis and Treatment of Speech Sound Disorders in Children: Position Statement and Future Perspectives.","authors":"Ben Maassen, Hayo Terband","doi":"10.1044/2024_JSLHR-23-00591","DOIUrl":"10.1044/2024_JSLHR-23-00591","url":null,"abstract":"<p><strong>Background: </strong>Children with speech sound disorders (SSD) form a heterogeneous group, with respect to severity, etiology, proximal causes, speech error characteristics, and response to treatment. Infants develop speech and language in interaction with neurological maturation and general perceptual, motoric, and cognitive skills in a social-emotional context.</p><p><strong>Purpose: </strong>After a brief introduction into psycholinguistic models of speech production and levels of causation, in this review article, we present an in-depth overview of mechanisms and processes, and the dynamics thereof, which are crucial in typical speech development. These basic mechanisms and processes are: (a) neurophysiological motor refinement, that is, the maturational articulatory mechanisms that drive babbling and the more differentiated production of larger speech patterns; (b) sensorimotor integration, which forms the steering function from phonetics to phonology; and (c) motor hierarchy and articulatory phonology describing the gestural organization of syllables, which underlie fluent speech production. These dynamics have consequences for the diagnosis and further analysis of SSD in children. We argue that current diagnostic classification systems do not do justice to the multilevel, multifactorial, and interactive character of the underlying mechanisms and processes. This is illustrated by a recent Dutch study yielding distinct performance profiles among children with SSD, which allows for a dimensional interpretation of underlying processing deficits.</p><p><strong>Conclusions: </strong>Analyses of mainstream treatments with respect to the treatment goals and the speech mechanisms addressed show that treatment programs are quite transparent in their aims and approach and how they contribute to remediating specific deficits or mechanisms. Recent studies into clinical reasoning reveal that the clinical challenge for speech-language pathologists is how to select the most appropriate treatment at the most appropriate time for each individual child with SSD. We argue that a process-oriented approach has merits as compared to categorical diagnostics as a toolbox to aid in the interpretation of the speech profile in terms of underlying deficits and to connect these to a specific intervention approach and treatment target.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4115-4136"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142300396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24. Epub Date: 2023-09-07. DOI: 10.1044/2023_JSLHR-23-00087
Anna Huynh, Kerry Adams, Carolina Barnett-Tapia, Sanjay Kalra, Lorne Zinman, Yana Yunusova
Purpose: This study sought to explore how patients with amyotrophic lateral sclerosis (ALS) presenting with coexisting bulbar and cognitive impairments and their caregivers experienced the speech-language pathologist (SLP) services provided in multidisciplinary ALS clinics in Canada and identified their perceived needs for bulbar symptom management.
Method: This qualitative study was informed by interpretive description. Seven interviews were conducted with patients with severe bulbar dysfunction or severe bulbar and cognitive dysfunction due to ALS or ALS-frontotemporal dementia, respectively, and/or their caregivers. Purposive sampling was used to recruit individuals with severe bulbar or bulbar and cognitive disease. Thematic analysis was used to analyze interview data.
Results: Patients and caregivers reported difficulties with accessing and receiving SLP services at the multidisciplinary ALS clinic. These difficulties were further exacerbated in those with severe cognitive disease. Participants expressed a need for more specific (i.e., disease and service-related) information and personalized care to address their changing needs and preferences. Engaging caregivers earlier in SLP appointments was perceived as vital to support care planning and provide in-time caregiver education.
Conclusions: This study highlighted the challenges experienced by patients and caregivers in accessing and receiving SLP services. There is a pressing need for a more person-centered approach to ALS care and a continuing need for education of SLPs on care provision in cases of complex multisymptom diseases within a multidisciplinary ALS clinic.
{"title":"Accessing and Receiving Speech-Language Pathology Services at the Multidisciplinary Amyotrophic Lateral Sclerosis Clinic: An Exploratory Qualitative Study of Patient Experiences and Needs.","authors":"Anna Huynh, Kerry Adams, Carolina Barnett-Tapia, Sanjay Kalra, Lorne Zinman, Yana Yunusova","doi":"10.1044/2023_JSLHR-23-00087","DOIUrl":"10.1044/2023_JSLHR-23-00087","url":null,"abstract":"<p><strong>Purpose: </strong>This study sought to explore how patients with amyotrophic lateral sclerosis (ALS) presenting with coexisting bulbar and cognitive impairments and their caregivers experienced the speech-language pathologist (SLP) services provided in multidisciplinary ALS clinics in Canada and identified their perceived needs for bulbar symptom management.</p><p><strong>Method: </strong>This qualitative study was informed by interpretive description. Seven interviews were conducted with patients with severe bulbar dysfunction or severe bulbar and cognitive dysfunction due to ALS or ALS-frontotemporal dementia, respectively, and/or their caregivers. Purposive sampling was used to recruit individuals with severe bulbar or bulbar and cognitive disease. Thematic analysis was used to analyze interview data.</p><p><strong>Results: </strong>Patients and caregivers reported difficulties with accessing and receiving SLP services at the multidisciplinary ALS clinic. These difficulties were further exacerbated in those with severe cognitive disease. Participants expressed a need for more specific (i.e., disease and service-related) information and personalized care to address their changing needs and preferences. Engaging caregivers earlier in SLP appointments was perceived as vital to support care planning and provide in-time caregiver education.</p><p><strong>Conclusions: </strong>This study highlighted the challenges experienced by patients and caregivers in accessing and receiving SLP services. There is a pressing need for a more person-centered approach to ALS care and a continuing need for education of SLPs on care provision in cases of complex multisymptom diseases within a multidisciplinary ALS clinic.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.24069222.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"4025-4037"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11547048/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10184562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24. Epub Date: 2023-11-16. DOI: 10.1044/2023_JSLHR-23-00112
Raphael Werner, Susanne Fuchs, Jürgen Trouvain, Steffen Kürbis, Bernd Möbius, Peter Birkholz
Purpose: Breathing is ubiquitous in speech production, crucial for structuring speech, and a potential diagnostic indicator for respiratory diseases. However, the acoustic characteristics of speech breathing remain underresearched. This work aims to characterize the spectral properties of human inhalation noises in a large speaker sample and explore their potential similarities with speech sounds. Speech sounds are mostly realized with egressive airflow. To account for this, we investigated the effect of airflow direction (inhalation vs. exhalation) on acoustic properties of certain vocal tract (VT) configurations.
Method: To characterize human inhalation, we describe spectra of breath noises produced by human speakers from two data sets comprising 34 female and 100 male participants. To investigate the effect of airflow direction, three-dimensional-printed VT models of a male and a female speaker with static VT configurations of four vowels and four fricatives were used. An airstream was directed through these VT configurations in both directions, and their spectral consequences were analyzed.
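The kind of spectral description reported below (overall slope, weak peaks below 3 kHz) can be obtained with a Welch power spectral density and peak picking; this sketch uses synthetic noise as a stand-in for a real inhalation recording and makes no claim to match actual breath acoustics.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)
sr = 16000

# Synthetic stand-in for an inhalation noise: broadband noise with a gentle
# low-pass tilt (placeholder only, not a model of real breath sound).
noise = rng.normal(size=2 * sr)
b, a = signal.butter(1, 2000 / (sr / 2), btype="low")
breath = signal.lfilter(b, a, noise)

# Welch PSD, then peaks below 3 kHz.
freqs, psd = signal.welch(breath, fs=sr, nperseg=1024)
psd_db = 10 * np.log10(psd)
below_3k = freqs < 3000
peaks, _ = signal.find_peaks(psd_db[below_3k], prominence=3)
print("Peak frequencies (Hz):", np.round(freqs[below_3k][peaks]))
```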
Results: For human inhalations, we found spectra with a decreasing slope and several weak peaks below 3 kHz. These peaks show moderate (female) to strong (male) overlap with resonances found for participants inhaling with a VT configuration of a central vowel. Results for the VT models suggest that airflow direction is crucial for spectral properties of sibilants, /ç/, and /i:/, but not the other sounds we investigated. Inhalation noise is most similar to /ə/ where airflow direction does not play a role.
Conclusions: Inhalation is realized on ingressive airflow, and inhalation noises have specific resonance properties that are most similar to /ə/ but occur without phonation. Airflow direction does not play a role in this specific VT configuration, but subglottal resonances may do. For future work, we suggest investigating the articulation of speech breathing and link it to current work on pause postures.
{"title":"Acoustics of Breath Noises in Human Speech: Descriptive and Three-Dimensional Modeling Approaches.","authors":"Raphael Werner, Susanne Fuchs, Jürgen Trouvain, Steffen Kürbis, Bernd Möbius, Peter Birkholz","doi":"10.1044/2023_JSLHR-23-00112","DOIUrl":"10.1044/2023_JSLHR-23-00112","url":null,"abstract":"<p><strong>Purpose: </strong>Breathing is ubiquitous in speech production, crucial for structuring speech, and a potential diagnostic indicator for respiratory diseases. However, the acoustic characteristics of speech breathing remain underresearched. This work aims to characterize the spectral properties of human inhalation noises in a large speaker sample and explore their potential similarities with speech sounds. Speech sounds are mostly realized with egressive airflow. To account for this, we investigated the effect of airflow direction (inhalation vs. exhalation) on acoustic properties of certain vocal tract (VT) configurations.</p><p><strong>Method: </strong>To characterize human inhalation, we describe spectra of breath noises produced by human speakers from two data sets comprising 34 female and 100 male participants. To investigate the effect of airflow direction, three-dimensional-printed VT models of a male and a female speaker with static VT configurations of four vowels and four fricatives were used. An airstream was directed through these VT configurations in both directions, and their spectral consequences were analyzed.</p><p><strong>Results: </strong>For human inhalations, we found spectra with a decreasing slope and several weak peaks below 3 kHz. These peaks show moderate (female) to strong (male) overlap with resonances found for participants inhaling with a VT configuration of a central vowel. Results for the VT models suggest that airflow direction is crucial for spectral properties of sibilants, /ç/, and /i:/, but not the other sounds we investigated. Inhalation noise is most similar to /ə/ where airflow direction does not play a role.</p><p><strong>Conclusions: </strong>Inhalation is realized on ingressive airflow, and inhalation noises have specific resonance properties that are most similar to /ə/ but occur without phonation. Airflow direction does not play a role in this specific VT configuration, but subglottal resonances may do. For future work, we suggest investigating the articulation of speech breathing and link it to current work on pause postures.</p><p><strong>Supplemental material: </strong>https://doi.org/10.23641/asha.24520585.</p>","PeriodicalId":51254,"journal":{"name":"Journal of Speech Language and Hearing Research","volume":" ","pages":"3947-3961"},"PeriodicalIF":2.2,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136400235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}