{"title":"Holistic versus analytic scoring of spoken-language interpreting: a multi-perspectival comparative analysis","authors":"Jing Chen, Huabo Yang, Chao Han","doi":"10.1080/1750399X.2022.2084667","DOIUrl":null,"url":null,"abstract":"ABSTRACT Rubric scoring has been gaining traction as an emergent method to assess spoken-language interpreting, with two of the most well-known methods being rating scale-based holistic and analytic scoring. While the former provides a single global score, the latter generates separate scores on different dimensions of interpreting performance. Despite the growing use of the two methods, there has been little research documenting their uses in interpreting assessment. We therefore conducted the present study to find out how scoring methods (i.e. holistic versus analytic) would affect the dependability of rater-generated scores, rater behaviour, assessment outcomes, and rater perceptions. Overall, our quantitative data analysis indicates that although the two methods rank-ordered performances similarly, the holistic scoring led to relatively higher score dependability, regardless of interpreting directions, and that the raters’ assessments of interpreting into their less dominant language were less dependable. Our content analysis of the qualitative data reveals raters’ concerns with the substantive meaning of holistic scores and the design of analytic descriptors. We discussed these findings in light of available literature on interpreting assessment. By doing so, we hope to provide some evidential basis for scale selection in rater-mediated assessment of spoken-language interpreting.","PeriodicalId":45693,"journal":{"name":"Interpreter and Translator Trainer","volume":"16 1","pages":"558 - 576"},"PeriodicalIF":1.8000,"publicationDate":"2022-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Interpreter and Translator Trainer","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1080/1750399X.2022.2084667","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Abstract
Rubric scoring has been gaining traction as an emergent method for assessing spoken-language interpreting, with two of the best-known approaches being rating scale-based holistic and analytic scoring. While the former provides a single global score, the latter generates separate scores on different dimensions of interpreting performance. Despite the growing use of the two methods, there has been little research documenting their use in interpreting assessment. We therefore conducted the present study to find out how the scoring method (i.e. holistic versus analytic) affects the dependability of rater-generated scores, rater behaviour, assessment outcomes, and rater perceptions. Overall, our quantitative analysis indicates that although the two methods rank-ordered performances similarly, holistic scoring led to relatively higher score dependability regardless of interpreting direction, and that the raters' assessments of interpreting into their less dominant language were less dependable. Our content analysis of the qualitative data reveals raters' concerns with the substantive meaning of holistic scores and the design of analytic descriptors. We discuss these findings in light of the available literature on interpreting assessment. In doing so, we hope to provide an evidential basis for scale selection in rater-mediated assessment of spoken-language interpreting.
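
The abstract reports two kinds of quantitative evidence: rank-order agreement between the two scoring methods and the dependability of rater-generated scores. The sketch below is purely illustrative of how such quantities are commonly computed; it does not reproduce the authors' data, rating design, or analysis. The simulated scores, the number of examinees and raters, and the simplified fully crossed examinee-by-rater design are all assumptions introduced here for illustration (rank-order agreement via Spearman's rho, dependability via a generalizability coefficient estimated from ANOVA mean squares).

```python
# Illustrative sketch only: simulated scores and a simplified fully crossed
# examinee-by-rater design, not the authors' data or exact analysis.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_examinees, n_raters = 30, 6          # hypothetical sample sizes
true_ability = rng.normal(0, 1, n_examinees)

# Simulated holistic scores (one global score per rater) and analytic totals
# (summed dimension scores per rater), both on an arbitrary common scale.
holistic = true_ability[:, None] + rng.normal(0, 0.6, (n_examinees, n_raters))
analytic = true_ability[:, None] + rng.normal(0, 0.9, (n_examinees, n_raters))

def g_coefficient(scores):
    """Relative generalizability (dependability) coefficient for a fully
    crossed persons-by-raters design, estimated from ANOVA mean squares."""
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    ms_p = n_r * np.sum((person_means - grand) ** 2) / (n_p - 1)
    residual = scores - person_means[:, None] - rater_means[None, :] + grand
    ms_pr = np.sum(residual ** 2) / ((n_p - 1) * (n_r - 1))

    var_p = max((ms_p - ms_pr) / n_r, 0.0)   # person (true-score) variance
    var_pr = ms_pr                            # person-by-rater interaction + error
    return var_p / (var_p + var_pr / n_r)

# Rank-order agreement between the two scoring methods (rater-averaged scores)
rho, _ = spearmanr(holistic.mean(axis=1), analytic.mean(axis=1))
print(f"Spearman rho (holistic vs analytic rank order): {rho:.2f}")
print(f"G coefficient, holistic: {g_coefficient(holistic):.2f}")
print(f"G coefficient, analytic: {g_coefficient(analytic):.2f}")
```

With this setup, a high Spearman's rho alongside a higher G coefficient for the holistic matrix would mirror the pattern the abstract describes: similar rank ordering of performances, but greater score dependability for holistic scoring.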