The best of both worlds: Assessing trainee progression in the era of competency based medical education

IF 4.9 · Tier 1 (Education) · Q1 EDUCATION, SCIENTIFIC DISCIPLINES · Medical Education · Pub Date: 2024-04-10 · DOI: 10.1111/medu.15390
Stephen Gauthier, Rose Hatala

Abstract

As clinical educators working in the Canadian postgraduate medical education landscape, we are often asked ‘why competency-based medical education (CBME)’? CBME promises clearer training outcomes with a more explicit assessment of these outcomes.1 Ideally, this system allows for individualised attention to residents, in which areas for improvement are readily identified. Summative decisions are made by groups (e.g. clinical competence committees [CCCs]) that decide on the entrustment and promotion of individual residents based on assessments of their performance in professional activities (e.g. entrustable professional activities [EPAs] or milestones).1

In CBME, programs are attempting to implement prospective entrustment decisions while moving away from the systems of presumptive trust that were a hallmark of pre-CBME, time-based training models.1 Operationalising this in a meaningful way has been fraught with difficulty. While there were problems with an over-reliance on presumptive trust and an under-reliance on objective assessment of competence, we are concerned that the current CBME implementation has swung too far towards heavily relying on assessment tools and processes that lack validity evidence to support the entrustment and promotion decisions that CCCs are trying to make.

In North America, the implementation of CBME has meant that almost every professional activity (or milestone) deemed important has been tightly tied to the completion of directly observed workplace-based assessments (WBAs) of that activity. So tightly have EPAs been bound to these WBAs that residents and educators alike use the terms interchangeably (i.e. ‘Send me an EPA’ means ‘let us complete a WBA’2). Unfortunately, this overloads supervisors and residents with assessment quotas and overwhelms CCCs with assessment data, some meaningful, some not.3

Furthermore, there has been an over-emphasis on one very narrow conceptualisation of WBA as an entrustment-based tool meant to assess single encounters without considering if it is the right tool for the job. Several assessment tools exist that can be applied in the workplace (longitudinal WBA, indirect observation, multi-source feedback, etc.). For some activities, assessment outside the workplace (simulation, objective structured clinical examination [OSCE], etc.) might provide more useful information to CCCs.

Over-reliance on entrustment-based WBA, or over-reliance on WBA itself, is based on the dangerous assumption that an assessment tool with supportive validity evidence in one context is transferable to other contexts. In this issue, Ryan et al. show how unreliable WBA can be when a single WBA tool is deployed across different contexts.4

To combat the over-reliance on WBA, we argue for locally developed programmatic assessment.5 Individual programs and specialties need the autonomy to develop their own programs of assessment supported by validity evidence derived within their own contexts. While WBAs may be used to assess certain activities, assessments of other activities could rely on other assessment methods. Not every professional activity worth assessing needs a specific number of narrowly conceptualised WBAs for thoughtful CCCs to make defensible decisions about entrustment and promotion.

This brings us to the core question of any system of assessment, including CBME: What decisions about our residents are we trying to make?6 To identify residents in difficulty, do we have assessment tools with supportive validity evidence to identify these residents and the areas for improvement? To increase the feedback provided to residents, do we need high-volume WBA, and does WBA achieve this goal? To decide if a resident can be trusted to take on additional clinical responsibility, can the assessment system provide a reliable and holistic view of the resident's competence?

One path forward from this over-reliance on WBA is to recognise the value of presumptive trust (which grew out of years of experience with resident training and systems of practice) while leveraging the strengths of CBME in terms of clear and relevant training expectations and outcomes. A system where residents are given presumptive trust during certain stages of training with thoughtfully deployed assessments at key developmental moments would reduce the resource requirements of implementing high numbers of WBAs. Doing so in a way that works for all programs and specialties necessitates developing programmatic assessment situated in the local context. In this model, we could formalise and embrace the use of presumptive trust while adding more assessment than in the past, pausing at key moments of resident development and looking for red flags as a signal that the presumptive trust of an individual resident is not acceptable. Key to this approach would be to balance routine progression through training, grounded in a degree of presumptive trust, with a locally developed program of assessment that supports the CCC's decisions.

Using the analogy discussed in Schumacher et al.'s paper in this issue,7 while it is prudent to stop the conveyor belt of training to make prospective entrustment decisions, the conveyor belt need not be stopped for every single professional activity for every resident. Thoughtfully implementing a system that incorporates presumptive trust ensures that CCCs are not overwhelmed with a conveyor belt that is constantly turning on and off and instead could effectively focus on stopping the conveyor at key decision points. As Schumacher et al.'s study highlights, this model is in part what is currently happening on the ground.7

In such a model, we need local programs of assessment that are fit for purpose. Programs would start by asking themselves what problem they are addressing and what decisions they are making. Next, they would ask which data would best inform those decisions and which tools would best capture those data. This approach allows a CCC to consider its own local factors, such as the program's size, faculty interaction with assessment tools and how various assessment tools have worked for them in the past. Being clear about the link between the individual tools, the validity evidence supporting the program of assessment in the local context and how the CCC uses those data is key.

More data does not necessarily mean more defensible decisions. The limited time and resources available to training programs cannot be wasted on obtaining unhelpful assessment data that does not support the decisions they are trying to make. Reducing the use of WBA where it is not fit for purpose and developing locally sustainable and defensible programs of assessment are steps towards unlocking the value of CBME.

Stephen Gauthier: Conceptualization (equal); writing—original draft (lead); writing—review and editing (equal). Rose Hatala: Conceptualization (equal); writing—original draft (supporting); writing—review and editing (equal).
