Test validation in interpreter certification performance testing: An argument-based approach

Interpreting · Published 2016-01-01 · DOI: 10.1075/INTP.18.2.04HAN · Impact Factor 1.8 · Q1 (Literature) · Language & Linguistics
Chao Han, Helen Slatyer
Citations: 30

Abstract

Over the past decade, interpreter certification performance testing has gained momentum. Certification tests often involve high stakes, since they can play an important role in regulating access to professional practice and serve to provide a measure of professional competence for end users. The decision to award certification is based on inferences from candidates’ test scores about their knowledge, skills and abilities, as well as their interpreting performance in a given target domain. To justify the appropriateness of score-based inferences and actions, test developers need to provide evidence that the test is valid and reliable through a process of test validation. However, there is little evidence that test qualities are systematically evaluated in interpreter certification testing. In an attempt to address this problem, this paper proposes a theoretical argument-based validation framework for interpreter certification performance tests so as to guide testers in carrying out systematic validation research. Before presenting the framework, validity theory is reviewed, and an examination of the argument-based approach to validation is provided. A validity argument for interpreter tests is then proposed, with hypothesized validity evidence. Examples of evidence are drawn from relevant empirical work, where available. Gaps in the available evidence are highlighted and suggestions for research are made.
Source journal: Interpreting
CiteScore: 5.30 · Self-citation rate: 15.80% · Articles per year: 21
Journal description: Interpreting serves as a medium for research and debate on all aspects of interpreting, in its various modes, modalities (spoken and signed) and settings (conferences, media, courtroom, healthcare and others). Striving to promote our understanding of the socio-cultural, cognitive and linguistic dimensions of interpreting as an activity and process, the journal covers theoretical and methodological concerns, explores the history and professional ecology of interpreting and its role in society, and addresses current issues in professional practice and training.