An open dataset of connected speech in aphasia with consensus ratings of auditory-perceptual features.
IF 2.2 · Q3 (Computer Science, Information Systems) · Data · Pub Date: 2022-11-01 · Epub Date: 2022-10-30 · DOI: 10.3390/data7110148
Zoe Ezzes, Sarah M Schneck, Marianne Casilio, Davida Fromm, Antje Mefford, Michael R de Riesthal, Stephen M Wilson
{"title":"An open dataset of connected speech in aphasia with consensus ratings of auditory-perceptual features.","authors":"Zoe Ezzes, Sarah M Schneck, Marianne Casilio, Davida Fromm, Antje Mefford, Michael R de Riesthal, Stephen M Wilson","doi":"10.3390/data7110148","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Auditory-perceptual rating of connected speech in aphasia (APROCSA) involves trained listeners rating a large number of perceptual features of speech samples, and has shown promise as an approach for quantifying expressive speech and language function in individuals with aphasia. The aim of this study was to obtain consensus ratings for a diverse set of speech samples, which can then be used as training materials for learning the APROCSA system.</p><p><strong>Method: </strong>Connected speech samples were recorded from six individuals with chronic post-stroke aphasia. A segment containing the first five minutes of participant speech was excerpted from each sample, and 27 features were rated on a five-point scale by five researchers. The researchers then discussed each feature in turn to obtain consensus ratings.</p><p><strong>Results: </strong>Six connected speech samples are made freely available for research, education, and clinical uses. Consensus ratings are reported for each of the 27 features, for each speech sample. Discrepancies between raters were resolved through discussion, yielding consensus ratings that can be expected to be more accurate than mean ratings.</p><p><strong>Conclusions: </strong>The dataset will provide a useful resource for scientists, students, and clinicians to learn how to evaluate aphasic speech samples with an auditory-perceptual approach.</p>","PeriodicalId":36824,"journal":{"name":"Data","volume":"7 11","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10617630/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.3390/data7110148","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2022/10/30 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Purpose: Auditory-perceptual rating of connected speech in aphasia (APROCSA) involves trained listeners rating a large number of perceptual features of speech samples, and has shown promise as an approach for quantifying expressive speech and language function in individuals with aphasia. The aim of this study was to obtain consensus ratings for a diverse set of speech samples, which can then be used as training materials for learning the APROCSA system.
Method: Connected speech samples were recorded from six individuals with chronic post-stroke aphasia. A segment containing the first five minutes of participant speech was excerpted from each sample, and 27 features were rated on a five-point scale by five researchers. The researchers then discussed each feature in turn to obtain consensus ratings.
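The sketch below illustrates, under stated assumptions, how ratings of this kind could be organized and screened for disagreement before a consensus discussion. The feature names, rater scores, and the 0-4 coding of the five-point scale shown here are illustrative placeholders, not values from the published dataset.

```python
import pandas as pd

# Hypothetical ratings from five raters for a handful of features,
# each coded 0-4 to stand in for the five-point scale (illustrative only).
ratings = pd.DataFrame(
    {
        "rater_1": [3, 1, 0, 2],
        "rater_2": [3, 2, 0, 3],
        "rater_3": [4, 1, 1, 2],
        "rater_4": [3, 1, 0, 2],
        "rater_5": [3, 2, 0, 4],
    },
    index=["Anomia", "Semantic paraphasias", "Neologisms", "Agrammatism"],
)

# Mean rating per feature across the five raters.
ratings["mean"] = ratings.mean(axis=1)

# Flag features where raters diverge by two or more points; in practice
# these would be prime candidates for resolution in the consensus discussion.
rater_cols = ratings.filter(like="rater_")
ratings["needs_discussion"] = (rater_cols.max(axis=1) - rater_cols.min(axis=1)) >= 2

print(ratings)
```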
Results: Six connected speech samples are made freely available for research, education, and clinical uses. Consensus ratings are reported for each of the 27 features, for each speech sample. Discrepancies between raters were resolved through discussion, yielding consensus ratings that can be expected to be more accurate than mean ratings.
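Because the consensus ratings are intended as a learning benchmark, one plausible use is to rate a sample independently and then compare against the published consensus. The sketch below assumes hypothetical CSV files and column names (`consensus_ratings.csv`, `sample_1`, `rating`); the actual file layout of the released dataset may differ.

```python
import pandas as pd

# Hypothetical usage: compare a learner's practice ratings for one sample
# against the published consensus ratings. File names and columns are assumed.
consensus = pd.read_csv("consensus_ratings.csv", index_col="feature")    # 27 features x 6 samples
my_ratings = pd.read_csv("my_ratings_sample1.csv", index_col="feature")  # learner's ratings, one sample

comparison = pd.DataFrame(
    {
        "consensus": consensus["sample_1"],
        "mine": my_ratings["rating"],
    }
)
comparison["abs_diff"] = (comparison["mine"] - comparison["consensus"]).abs()

# Summarize agreement: mean absolute difference, plus features off by 2+ points.
print(f"Mean absolute difference: {comparison['abs_diff'].mean():.2f}")
print(comparison[comparison["abs_diff"] >= 2])
```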
Conclusions: The dataset will provide a useful resource for scientists, students, and clinicians to learn how to evaluate aphasic speech samples with an auditory-perceptual approach.