SHARP (SHort Answer, Rationale Provision): A New Item Format to Assess Clinical Reasoning.

Academic Medicine · IF 5.3 · CAS Region 2 (Education) · JCR Q1 (Education, Scientific Disciplines) · Pub date: 2024-09-01 (Epub 2024-05-15) · Pages 976-980 · DOI: 10.1097/ACM.0000000000005769
Christopher R Runyon, Miguel A Paniagua, Francine A Rosenthal, Andrea L Veneziano, Lauren McNaughton, Constance T Murray, Polina Harik

Abstract

Problem: Many non-workplace-based assessments do not provide good evidence of a learner's problem representation or ability to provide a rationale for a clinical decision they have made. Exceptions include assessment formats that require resource-intensive administration and scoring. This article reports on research efforts toward building a scalable non-workplace-based assessment format that was specifically developed to capture evidence of a learner's ability to justify a clinical decision.

Approach: The authors developed a 2-step item format called SHARP (SHort Answer, Rationale Provision), named for the 2 tasks that make up the item. In collaboration with physician-educators, the authors integrated short-answer questions into a patient medical record-based item starting in October 2021 and arrived at an innovative item format in December 2021. In this format, a test-taker interprets patient medical record data to make a clinical decision, types in their response, and pinpoints medical record details that justify their answer. In January 2022, a total of 177 fourth-year medical students, representing 20 U.S. medical schools, completed 35 SHARP items in a proof-of-concept study.
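The article does not publish its scoring algorithm, only that scoring is fully automated. As a minimal hypothetical sketch, a two-task SHARP response might pair a typed short answer (checked against a key of accepted phrasings) with a set of cited medical-record details (checked against a rationale key). The `SharpResponse` class, the exact-match answer check, and the `accepted_answers`/`key_rationale_lines` keys below are all illustrative assumptions, not the authors' method:

```python
from dataclasses import dataclass

@dataclass
class SharpResponse:
    """Hypothetical representation of one examinee's SHARP item response."""
    answer_text: str            # task 1: the typed short answer
    cited_record_lines: set     # task 2: record details selected as rationale

def score_sharp(resp, accepted_answers, key_rationale_lines):
    """Illustrative scoring: the answer must match an accepted phrasing,
    and the cited rationale must cover every key record detail.
    Both keys are assumed structures, not taken from the article."""
    answer_ok = resp.answer_text.strip().lower() in accepted_answers
    rationale_ok = key_rationale_lines <= resp.cited_record_lines
    return answer_ok and rationale_ok
```

A real system would likely need more forgiving answer matching (synonyms, spelling variants) and partial credit for rationale overlap; this sketch only shows how the two tasks could be combined into one automated score.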

Outcomes: Primary outcomes were item timing, difficulty, reliability, and scoring ease. There was substantial variability in item difficulty, with the average item answered correctly by 44% of students (range, 4%-76%). The estimated reliability (Cronbach α) of the set of SHARP items was 0.76 (95% confidence interval, 0.70-0.80). Item scoring is fully automated, minimizing resource requirements.
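For reference, the two statistics reported above are classical item difficulty (proportion correct per item) and Cronbach's α. A minimal sketch of both computations on a binary score matrix follows; the data in the test is illustrative, not the study's:

```python
import numpy as np

def item_difficulty(scores):
    """Classical item difficulty: proportion correct per item
    for an (n_examinees, n_items) 0/1 score matrix."""
    return np.asarray(scores, dtype=float).mean(axis=0)

def cronbach_alpha(scores):
    """Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item vars) / total var),
    where k is the number of items and variances use ddof=1."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item column
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

A difficulty of 0.44 means the average item was answered correctly by 44% of examinees; α of 0.76 indicates the 35 items rank examinees fairly consistently.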

Next steps: A larger study is planned to gather additional validity evidence about the item format. This study will allow comparisons between performance on SHARP items and other examinations, examination of group differences in performance, and possible use cases for formative assessment. Cognitive interviews are also planned to better understand the thought processes of medical students as they work through the SHARP items.

Source journal
Academic Medicine (Medicine: Health Care Sciences & Services)
CiteScore: 7.80
Self-citation rate: 9.50%
Articles published: 982
Review time: 3-6 weeks
Journal introduction: Academic Medicine, the official peer-reviewed journal of the Association of American Medical Colleges, acts as an international forum for exchanging ideas, information, and strategies to address the significant challenges in academic medicine. The journal covers areas such as research, education, clinical care, community collaboration, and leadership, with a commitment to serving the public interest.
Latest articles from this journal
- Validating the 2023 Association of American Medical Colleges Graduate Medical Education Leadership Competencies.
- World Federation for Medical Education Recognizes 5 International Accrediting Bodies.
- Irony.
- Teaching Opportunities for Postgraduate Trainees in Primary Care.
- How Many Is Too Many? Using Cognitive Load Theory to Determine the Maximum Safe Number of Inpatient Consultations for Trainees.