Whose voices are heard in health professions education validity arguments?

Medical Education · Pub Date: 2024-09-10 · DOI: 10.1111/medu.15528
Georgina C. Stephens, Gabrielle Brand, Sharon Yahalom
{"title":"Whose voices are heard in health professions education validity arguments?","authors":"Georgina C. Stephens,&nbsp;Gabrielle Brand,&nbsp;Sharon Yahalom","doi":"10.1111/medu.15528","DOIUrl":null,"url":null,"abstract":"<p>The construction of validity arguments for assessments in health professions education (HPE) has been likened to a lawyer preparing for and presenting a case.<span><sup>1</sup></span> Like a lawyer curates a brief of evidence with the aim of convincing a judge or jury to make a particular decision about their client, so are health professions educators required to provide validity evidence that supports a defensible decision about a student being assessed.<span><sup>1</sup></span> Kane's argument-based validity framework,<span><sup>2</sup></span> now expanded by scholars in language testing and assessment (LTA), addresses challenges of prior conceptualisations of validity by providing a staged approach to building a validation argument according to ‘inferences’. Whereas Kane's original four inference model commences with scoring,<span><sup>2</sup></span> the expanded seven inference model as described in this issue by Dai et al.<span><sup>3</sup></span> in LTA commences with a domain description.</p><p>The goal of the domain description is to ensure that the ‘selection, design and delivery of the test tasks takes the relevant target domain into account’.<span><sup>3</sup></span> Described sources of backing for this inference include interviews or surveys of domain insiders. Starting with a domain description should provide a solid foundation for subsequent inferences made about the assessment but also begs the question of who are considered domain insiders. And whether insights from diverse groups with insider perspectives can together build a more robust and nuanced validity argument. Returning to the analogy of the lawyer's decision-making processes, there may be multiple witnesses with evidence to share, but which witnesses are called upon to provide evidence in court? Or alternatively, which witnesses are not selected out of concern that their differing perspectives may threaten the lawyer's plan for the case?</p><p>Domain insiders could be considered ‘expert witnesses’, that is, those with subject matter expertise typically built through education and professional experience, such as health professions educators with clinical and/or pedagogical expertise. While subject matter expertise is important to understanding whether assessment tasks sufficiently reflect the domain being assessed, potential differences between expert and novice (i.e., student) understandings of a domain could disrupt a validity argument. 
Consider assessments of uncertainty tolerance (UT): Commonly used UT scales intended to measure UT in healthcare contexts engaged expertise during scale development in the form of interviews with health professionals, reviews of construct literature and consultation with medical educator peers.<span><sup>4</sup></span> One UT scale has been used by the Association of American Medical Colleges as part of routine matriculation and graduation surveys of medical students, with the intent that the results inform medical school programmatic evaluation.<span><sup>4</sup></span> However, the perspectives of medical students were not included as part of the validity argument for this scale.<span><sup>4</sup></span></p><p>Research on the UT construct more broadly highlights potential differences between expert and student conceptions of uncertainty, with students' conceptions of uncertainty focussed on individual knowledge gaps rather than uncertainty inherent within patient care more typical of clinical experts.<span><sup>5</sup></span> By not including medical student perspectives as part of the validity argument, the meaning of scale results could potentially be misinterpreted. For example, an increase in UT scores from matriculation to graduation could represent that knowledge gaps have been filled and perceived uncertainty reduced, and not that students are better at ‘tolerating’ or managing uncertainty, an essential graduate skill needed to manage the dynamic, complex and unpredictable nature of healthcare. Hence, including test takers as domain insiders or ‘witnesses’ early in constructing a validity argument may enable those developing an assessment to proactively identify early threats to their argument before evaluating subsequent inferences in the framework.</p><p>An example of an assessment where diverse ‘witnesses’ enabled inferences to be made about the domain description, and where the field of LTA and HPE intersect, is the Occupational English Test (OET).<span><sup>6</sup></span> The OET is an English-language test designed for health professionals originally developed by LTA experts and includes writing a letter of referral to assess written clinical communication.<span><sup>7</sup></span> While updating and modifying the assessment criteria for the written component of the test, applied linguists, LTA experts and HPE researchers collaborated with health professionals (domain insiders) through an interview and focus group study with clinicians and other stakeholders to identify criteria indigenous to healthcare settings.<span><sup>6, 7</sup></span></p><p>The differing expertise of members of the research team enabled them to make recommendations about modifications to the assessment criteria based on the data.<span><sup>6</sup></span> For instance, the health professionals were vital for ensuring the establishment of professionally relevant criteria, whereas the LTA experts were crucial for defining levels of performance.<span><sup>6</sup></span> When analysing data, coding schemes were initially developed by the LTA experts and then refined through collaboration with the health professionals. 
The researchers in this study were transparent about how collaboration posed some challenges; however, they concluded that it ultimately enabled a greater depth and understanding of the complexity of the domain being assessed, and awareness of this complexity would not have been achieved without collaboration.<span><sup>6</sup></span> Such insights identified through the process of domain description for the OET could in turn be used to enhance HPE (e.g., workshops on clinical note writing for final-year health professions students).</p><p>Collaborative approaches to understanding the perspectives of domain insiders may lead to richer and more robust validity arguments and assessments when compared with those centred on ‘expert witnesses’. Just as neglecting to call a key witness to the stand would be considered an unacceptable omission by a lawyer, the field of HPE should more broadly consider whose voices are valued and included in validity arguments so that important insights into domains are not overlooked. This can be achieved by inviting testimony from diverse stakeholders, including health professions students and patients (healthcare consumers), on what is important to and valued by them so that HPE aligns with community needs, values and expectations of healthcare. The cross-cutting edge paper by Dai et al.<span><sup>3</sup></span> provides a timely reminder of how interdisciplinary dialogue (in this case, between HPE and LTA) can expand the epistemological landscape in HPE and research to generate new ways of thinking about validation practice in HPE.<span><sup>8</sup></span> HPE in general is moving towards transformational approaches that integrate student and patient perspectives, as exemplified by the growing use of co-design methodology.<span><sup>9, 10</sup></span> Co-design is underpinned by principles of inclusive, respectful, participative, iterative and outcomes-focussed consumer engagement.<span><sup>9</sup></span> Although the use of co-design methodology in assessment is currently limited,<span><sup>10</sup></span> co-design principles may be well suited to navigating how diverse assessment stakeholder perspectives can be explored and successfully integrated within a domain description and ultimately inform the development of curricula and assessments.</p><p>Returning to the opening analogy likening validity arguments to a lawyer constructing and presenting a case, it may be helpful for health professions researchers and educators to think more broadly about who is (or is not) given the opportunity to contribute to building validity evidence. For HPE, this may involve including ‘witnesses’ with diverse lived and learned expertise (e.g., students, patients and practicing clinicians) in collaboration with researchers with diverse backgrounds (e.g., interdisciplinary collaboration between HPE and LTA researchers). By broadly considering who is a domain insider, health professions educators and researchers may facilitate a more nuanced and holistic domain description, more robust arguments for the interpretation of test scores and ultimately the creation of richer and more relevant assessments with clear benefits for health professions educators, researchers, students and the wider community.</p><p><b>Georgina C. Stephens:</b> Conceptualization; writing—original draft. <b>Gabrielle Brand:</b> Conceptualization; writing—original draft. 
<b>Sharon Yahalom:</b> Conceptualization; writing—original draft.</p>","PeriodicalId":18370,"journal":{"name":"Medical Education","volume":"58 12","pages":"1429-1432"},"PeriodicalIF":4.9000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/medu.15528","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Education","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/medu.15528","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
引用次数: 0

Abstract

The construction of validity arguments for assessments in health professions education (HPE) has been likened to a lawyer preparing for and presenting a case.1 Just as a lawyer curates a brief of evidence with the aim of convincing a judge or jury to make a particular decision about their client, health professions educators are required to provide validity evidence that supports a defensible decision about a student being assessed.1 Kane's argument-based validity framework,2 now expanded by scholars in language testing and assessment (LTA), addresses challenges of prior conceptualisations of validity by providing a staged approach to building a validation argument according to ‘inferences’. Whereas Kane's original four-inference model commences with scoring,2 the expanded seven-inference model from LTA, described in this issue by Dai et al.,3 commences with a domain description.

The goal of the domain description is to ensure that the ‘selection, design and delivery of the test tasks takes the relevant target domain into account’.3 Described sources of backing for this inference include interviews or surveys of domain insiders. Starting with a domain description should provide a solid foundation for subsequent inferences made about the assessment, but it also raises the question of who is considered a domain insider, and whether insights from diverse groups with insider perspectives can together build a more robust and nuanced validity argument. Returning to the analogy of the lawyer's decision-making processes, there may be multiple witnesses with evidence to share, but which witnesses are called upon to provide evidence in court? Or, alternatively, which witnesses are not selected out of concern that their differing perspectives may threaten the lawyer's plan for the case?

Domain insiders could be considered ‘expert witnesses’, that is, those with subject matter expertise typically built through education and professional experience, such as health professions educators with clinical and/or pedagogical expertise. While subject matter expertise is important to understanding whether assessment tasks sufficiently reflect the domain being assessed, potential differences between expert and novice (i.e., student) understandings of a domain could disrupt a validity argument. Consider assessments of uncertainty tolerance (UT): commonly used UT scales intended to measure UT in healthcare contexts drew on expertise during scale development, in the form of interviews with health professionals, reviews of construct literature and consultation with medical educator peers.4 One UT scale has been used by the Association of American Medical Colleges as part of routine matriculation and graduation surveys of medical students, with the intent that the results inform medical school programmatic evaluation.4 However, the perspectives of medical students were not included as part of the validity argument for this scale.4

Research on the UT construct more broadly highlights potential differences between expert and student conceptions of uncertainty, with students' conceptions of uncertainty focussed on individual knowledge gaps rather than the uncertainty inherent within patient care that is more typical of clinical experts.5 By not including medical student perspectives as part of the validity argument, the meaning of scale results could be misinterpreted. For example, an increase in UT scores from matriculation to graduation could indicate that knowledge gaps have been filled and perceived uncertainty reduced, rather than that students have become better at ‘tolerating’ or managing uncertainty, an essential graduate skill needed to manage the dynamic, complex and unpredictable nature of healthcare. Hence, including test takers as domain insiders or ‘witnesses’ early in constructing a validity argument may enable those developing an assessment to proactively identify early threats to their argument before evaluating subsequent inferences in the framework.

An example of an assessment where diverse ‘witnesses’ enabled inferences to be made about the domain description, and where the fields of LTA and HPE intersect, is the Occupational English Test (OET).6 The OET is an English-language test for health professionals, originally developed by LTA experts, that includes writing a letter of referral to assess written clinical communication.7 While updating and modifying the assessment criteria for the written component of the test, applied linguists, LTA experts and HPE researchers collaborated with health professionals (domain insiders) through an interview and focus group study with clinicians and other stakeholders to identify criteria indigenous to healthcare settings.6, 7

The differing expertise of members of the research team enabled them to make recommendations about modifications to the assessment criteria based on the data.6 For instance, the health professionals were vital for ensuring the establishment of professionally relevant criteria, whereas the LTA experts were crucial for defining levels of performance.6 When analysing data, coding schemes were initially developed by the LTA experts and then refined through collaboration with the health professionals. The researchers in this study were transparent about how collaboration posed some challenges; however, they concluded that it ultimately enabled a deeper understanding of the complexity of the domain being assessed, an awareness that would not have been achieved without collaboration.6 Such insights identified through the process of domain description for the OET could in turn be used to enhance HPE (e.g., workshops on clinical note writing for final-year health professions students).

Collaborative approaches to understanding the perspectives of domain insiders may lead to richer and more robust validity arguments and assessments when compared with those centred on ‘expert witnesses’. Just as neglecting to call a key witness to the stand would be considered an unacceptable omission by a lawyer, the field of HPE should more broadly consider whose voices are valued and included in validity arguments so that important insights into domains are not overlooked. This can be achieved by inviting testimony from diverse stakeholders, including health professions students and patients (healthcare consumers), on what is important to and valued by them so that HPE aligns with community needs, values and expectations of healthcare. The cross-cutting edge paper by Dai et al.3 provides a timely reminder of how interdisciplinary dialogue (in this case, between HPE and LTA) can expand the epistemological landscape in HPE and research to generate new ways of thinking about validation practice in HPE.8 HPE in general is moving towards transformational approaches that integrate student and patient perspectives, as exemplified by the growing use of co-design methodology.9, 10 Co-design is underpinned by principles of inclusive, respectful, participative, iterative and outcomes-focussed consumer engagement.9 Although the use of co-design methodology in assessment is currently limited,10 co-design principles may be well suited to navigating how diverse assessment stakeholder perspectives can be explored and successfully integrated within a domain description and ultimately inform the development of curricula and assessments.

Returning to the opening analogy likening validity arguments to a lawyer constructing and presenting a case, it may be helpful for health professions researchers and educators to think more broadly about who is (or is not) given the opportunity to contribute to building validity evidence. For HPE, this may involve including ‘witnesses’ with diverse lived and learned expertise (e.g., students, patients and practising clinicians) in collaboration with researchers with diverse backgrounds (e.g., interdisciplinary collaboration between HPE and LTA researchers). By broadly considering who is a domain insider, health professions educators and researchers may facilitate a more nuanced and holistic domain description, more robust arguments for the interpretation of test scores and, ultimately, the creation of richer and more relevant assessments with clear benefits for health professions educators, researchers, students and the wider community.

Georgina C. Stephens: Conceptualization; writing—original draft. Gabrielle Brand: Conceptualization; writing—original draft. Sharon Yahalom: Conceptualization; writing—original draft.
