Adductor Laryngeal Dystonia Versus Muscle Tension Dysphonia: Examining the Utility of Automated Acoustic Analysis to Detect Task Dependency as a Distinguishing Feature.
Nelson Roy, Shaheen N. Awan, Skyler Jennings, Jenna Jensen, Ray M. Merrill
Journal of Speech Language and Hearing Research (Q1, Audiology & Speech-Language Pathology; Impact Factor 2.2)
Published: 2024-09-11 · DOI: 10.1044/2024_jslhr-24-00104
Citations: 0
Abstract
OBJECTIVE
Differentiating adductor laryngeal dystonia (ADLD) and primary muscle tension dysphonia (pMTD) can be challenging. Unlike pMTD, ADLD is described as "task-dependent" with voiced phonemes purportedly provoking greater sign expression than voiceless phonemes. We evaluated the ability of two automated acoustic measures, the Cepstral Spectral Index of Dysphonia (CSID) and creak, to detect task dependency and to discriminate ADLD and pMTD.
METHOD
CSID, % creak, and listener ratings of dysphonia severity were obtained from audio recordings of patients with ADLD (n = 29) or pMTD (n = 33) reading two sentences loaded with either voiced or voiceless phonemes.
RESULTS
Group × Sentence Type interaction effects confirmed that both "normalized" CSID and % creak detected task-dependent sign expression in ADLD (i.e., worse symptoms on the voiced- vs. voiceless-loaded sentence). However, a stepwise binary logistic regression analysis with group (ADLD vs. pMTD) as the dependent variable and % creak and normalized CSID variables (voiced, voiceless, and voiced vs. voiceless difference) as covariates revealed that the normalized CSID voiceless-loaded sentence z score was the only significant predictor of group membership. Estimates of diagnostic precision from the normalized CSID voiceless sentence z scores were superior to % creak or listener ratings. Finally, the CSID possessed the strongest correlations with listener severity ratings regardless of group or sentence type.
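The covariate-selection step described above can be illustrated with a minimal sketch. Everything here is synthetic and hypothetical — the data below are randomly generated to loosely mimic the reported pattern (pMTD similarly dysphonic on both sentence types; ADLD markedly less severe on the voiceless-loaded sentence), not the study's measurements, and the study's stepwise procedure is simplified to a single-covariate comparison of in-sample fits.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
group = np.array([1] * 29 + [0] * 33)  # 1 = ADLD, 0 = pMTD (ns as in the study)

# Hypothetical normalized CSID z scores:
# pMTD stays severe on both sentences; ADLD approaches typical
# expectations on the voiceless-loaded sentence.
z_voiced = 3.0 + rng.normal(0, 0.8, group.size)
z_voiceless = np.where(group == 1,
                       1.0 + rng.normal(0, 0.6, group.size),   # ADLD
                       3.0 + rng.normal(0, 0.8, group.size))   # pMTD
z_diff = z_voiced - z_voiceless

# Compare each candidate covariate as a lone predictor of group membership.
candidates = {"voiced": z_voiced, "voiceless": z_voiceless, "difference": z_diff}
scores = {}
for name, x in candidates.items():
    model = LogisticRegression().fit(x.reshape(-1, 1), group)
    scores[name] = model.score(x.reshape(-1, 1), group)  # in-sample accuracy

print(scores)
```

Under these assumptions, the voiceless-sentence z score separates the two groups far better than the voiced-sentence score, mirroring (but not reproducing) the study's finding that it was the sole significant predictor.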
CONCLUSIONS
Although both normalized CSID and % creak detected task-dependent performance as a distinguishing feature of ADLD, a CSID profile wherein (a) the voiceless sentence z score was less severe than the voiced sentence and (b) the normalized voiceless sentence z score was within approximately 2 SDs (or less) of typical expectations provided the best estimates of diagnostic precision. Automated acoustic measures such as the CSID and creak provide useful information to objectively discriminate ADLD and pMTD.
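The two-part CSID profile described in the conclusions — (a) voiceless less severe than voiced, and (b) voiceless within roughly 2 SDs of typical expectations — amounts to a simple decision rule. The following sketch is purely illustrative: the function name, argument names, and the exact 2.0 cutoff are assumptions for demonstration, not the authors' published criterion.

```python
def adld_profile_flag(z_voiced: float, z_voiceless: float,
                      sd_cutoff: float = 2.0) -> bool:
    """Return True when a speaker's normalized CSID z scores match the
    illustrative ADLD profile: the voiceless-loaded sentence is less
    severe than the voiced-loaded one AND falls within ~2 SDs of
    typical expectations."""
    less_severe_voiceless = z_voiceless < z_voiced
    near_typical_voiceless = z_voiceless <= sd_cutoff
    return less_severe_voiceless and near_typical_voiceless

# Example: severe voiced sentence, near-typical voiceless sentence
# -> consistent with the task-dependent ADLD profile.
print(adld_profile_flag(3.5, 1.2))
```

A rule like this is only a screening heuristic; in practice the cutoff would need to be calibrated against normative CSID data.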
About the Journal
Mission: JSLHR publishes peer-reviewed research and other scholarly articles on the normal and disordered processes in speech, language, hearing, and related areas such as cognition, oral-motor function, and swallowing. The journal is an international outlet for both basic research on communication processes and clinical research pertaining to screening, diagnosis, and management of communication disorders as well as the etiologies and characteristics of these disorders. JSLHR seeks to advance evidence-based practice by disseminating the results of new studies as well as providing a forum for critical reviews and meta-analyses of previously published work.
Scope: The broad field of communication sciences and disorders, including speech production and perception; anatomy and physiology of speech and voice; genetics, biomechanics, and other basic sciences pertaining to human communication; mastication and swallowing; speech disorders; voice disorders; development of speech, language, or hearing in children; normal language processes; language disorders; disorders of hearing and balance; psychoacoustics; and anatomy and physiology of hearing.