Expanding Kane's argument-based validity framework: What can validation practices in language assessment offer health professions education?

Medical Education · Impact Factor 4.9 · CAS Tier 1 (Education) · JCR Q1 (Education, Scientific Disciplines) · Pub Date: 2024-06-13 · DOI: 10.1111/medu.15452
David Wei Dai, Thao Vu, Ute Knoch, Angelina S Lim, Daniel Thomas Malone, Vivienne Mak

Abstract

Context: One central consideration in health professions education (HPE) is ensuring that we make sound and justifiable decisions based on the assessment instruments we use with health professionals. To achieve this goal, HPE assessment researchers have drawn on Kane's argument-based framework to ascertain the validity of their assessment tools. However, the original four-inference model proposed by Kane, frequently used in HPE validation research, has limitations in terms of what each inference entails and what claims and sources of backing are housed in each inference. This under-specification in the four-inference model has led to inconsistent practices in HPE validation research, posing challenges for (i) researchers who want to evaluate the validity of different HPE assessment tools and/or (ii) researchers who are new to test validation and need to establish a coherent understanding of argument-based validation.

Methods: To address these concerns, this article introduces the expanded seven-inference argument-based validation framework that is established practice in the field of language testing and assessment (LTA). We explicate (i) why LTA researchers saw the need to further specify the original four Kanean inferences; (ii) how LTA validation research defines each of the seven inferences; and (iii) what claims, assumptions and sources of backing are associated with each inference. Sampling six representative validation studies in HPE, we demonstrate why an expanded model and a shared disciplinary validation framework can facilitate the examination of validity evidence in diverse HPE validation contexts.

Conclusions: We invite HPE validation researchers to experiment with the seven-inference argument-based framework from LTA to evaluate its usefulness to HPE. We also call for greater interdisciplinary dialogue between HPE and LTA since both disciplines share many fundamental concerns about language use, communication skills, assessment practices and validity in assessment instruments.

Source journal

Medical Education (Medicine — Health Care)

CiteScore: 8.40 · Self-citation rate: 10.00% · Articles per year: 279 · Review time: 4-8 weeks

Journal description: Medical Education seeks to be the pre-eminent journal in the field of education for health care professionals, and publishes material of the highest quality, reflecting worldwide or provocative issues and perspectives. The journal welcomes high-quality papers on all aspects of health professional education, including undergraduate education, postgraduate training, continuing professional development and interprofessional education.