Christopher R Runyon, Miguel A Paniagua, Francine A Rosenthal, Andrea L Veneziano, Lauren McNaughton, Constance T Murray, Polina Harik
{"title":"SHARP (SHort Answer, Rationale Provision): A New Item Format to Assess Clinical Reasoning.","authors":"Christopher R Runyon, Miguel A Paniagua, Francine A Rosenthal, Andrea L Veneziano, Lauren McNaughton, Constance T Murray, Polina Harik","doi":"10.1097/ACM.0000000000005769","DOIUrl":null,"url":null,"abstract":"<p><strong>Problem: </strong>Many non-workplace-based assessments do not provide good evidence of a learner's problem representation or ability to provide a rationale for a clinical decision they have made. Exceptions include assessment formats that require resource-intensive administration and scoring. This article reports on research efforts toward building a scalable non-workplace-based assessment format that was specifically developed to capture evidence of a learner's ability to justify a clinical decision.</p><p><strong>Approach: </strong>The authors developed a 2-step item format called SHARP (SHort Answer, Rationale Provision), referring to the 2 tasks that comprise the item. In collaboration with physician-educators, the authors integrated short-answer questions into a patient medical record-based item starting in October 2021 and arrived at an innovative item format in December 2021. In this format, a test-taker interprets patient medical record data to make a clinical decision, types in their response, and pinpoints medical record details that justify their answers. In January 2022, a total of 177 fourth-year medical students, representing 20 U.S. medical schools, completed 35 SHARP items in a proof-of-concept study.</p><p><strong>Outcomes: </strong>Primary outcomes were item timing, difficulty, reliability, and scoring ease. There was substantial variability in item difficulty, with the average item answered correctly by 44% of students (range, 4%-76%). The estimated reliability (Cronbach α ) of the set of SHARP items was 0.76 (95% confidence interval, 0.70-0.80). Item scoring is fully automated, minimizing resource requirements.</p><p><strong>Next steps: </strong>A larger study is planned to gather additional validity evidence about the item format. This study will allow comparisons between performance on SHARP items and other examinations, examination of group differences in performance, and possible use cases for formative assessment. Cognitive interviews are also planned to better understand the thought processes of medical students as they work through the SHARP items.</p>","PeriodicalId":50929,"journal":{"name":"Academic Medicine","volume":" ","pages":"976-980"},"PeriodicalIF":5.3000,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Academic Medicine","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1097/ACM.0000000000005769","RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/5/15 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Abstract
Problem: Many non-workplace-based assessments provide little evidence of a learner's problem representation or of their ability to articulate a rationale for a clinical decision they have made. Exceptions include assessment formats that require resource-intensive administration and scoring. This article reports on research efforts toward building a scalable non-workplace-based assessment format developed specifically to capture evidence of a learner's ability to justify a clinical decision.
Approach: The authors developed a 2-step item format called SHARP (SHort Answer, Rationale Provision), named for the 2 tasks that make up each item. In collaboration with physician-educators, the authors began integrating short-answer questions into a patient medical record-based item in October 2021 and arrived at an innovative item format in December 2021. In this format, a test-taker interprets patient medical record data to make a clinical decision, types in their response, and pinpoints the medical record details that justify their answer. In January 2022, a total of 177 fourth-year medical students, representing 20 U.S. medical schools, completed 35 SHARP items in a proof-of-concept study.
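The abstract specifies the 2 tasks but not the scoring rules, beyond noting later that scoring is fully automated. The sketch below is therefore only an illustration of how such 2-part automated scoring could work, assuming an exact-match answer key and a keyed set of justifying findings; every name here (SharpItem, accepted_answers, justifying_findings, score_sharp) is hypothetical and not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class SharpItem:
    # Hypothetical scoring key: acceptable phrasings of the clinical decision.
    accepted_answers: set[str]
    # Hypothetical key: record findings the examinee must pinpoint as justification.
    justifying_findings: set[str]

def score_sharp(item: SharpItem, typed_answer: str, selected_findings: set[str]) -> bool:
    """Credit only when the decision is correct AND the rationale is supported."""
    answer_ok = typed_answer.strip().lower() in item.accepted_answers
    # Require every keyed justifying finding to be among those the examinee selected.
    rationale_ok = item.justifying_findings <= selected_findings
    return answer_ok and rationale_ok

# Example: a toy item with one accepted answer and two required findings.
item = SharpItem({"community-acquired pneumonia"}, {"fever", "lobar infiltrate"})
print(score_sharp(item, "Community-acquired pneumonia", {"fever", "lobar infiltrate", "cough"}))  # True
```

A production scorer would almost certainly be more tolerant of spelling variants and synonyms; exact matching is used here only to keep the sketch self-contained.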
Outcomes: Primary outcomes were item timing, difficulty, reliability, and scoring ease. There was substantial variability in item difficulty, with the average item answered correctly by 44% of students (range, 4%-76%). The estimated reliability (Cronbach α) of the set of SHARP items was 0.76 (95% confidence interval, 0.70-0.80). Item scoring is fully automated, minimizing resource requirements.
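For readers who want to see how these statistics are computed: classical item difficulty is simply the proportion of examinees answering an item correctly, and Cronbach's α for k items is k/(k-1) × (1 - Σ item variances / total-score variance). The following minimal Python sketch computes both; the simulated 0/1 score matrix matches the study's dimensions (177 students × 35 items) but is random, so it exercises the functions without reproducing the reported α of 0.76.

```python
import numpy as np

def item_difficulty(scores: np.ndarray) -> np.ndarray:
    """Classical difficulty (p-value): proportion correct per item.

    scores: 0/1 matrix of shape (n_examinees, n_items).
    """
    return scores.mean(axis=0)

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinee total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated data sized like the study: 177 examinees, 35 items, ~44% average difficulty.
rng = np.random.default_rng(0)
scores = (rng.random((177, 35)) < 0.44).astype(int)
print(item_difficulty(scores).mean())  # ~0.44
print(cronbach_alpha(scores))          # near 0 for independent random items
```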
Next steps: A larger study is planned to gather additional validity evidence about the item format. This study will allow comparisons between performance on SHARP items and other examinations, examination of group differences in performance, and possible use cases for formative assessment. Cognitive interviews are also planned to better understand the thought processes of medical students as they work through the SHARP items.
About the Journal
Academic Medicine, the official peer-reviewed journal of the Association of American Medical Colleges, acts as an international forum for exchanging ideas, information, and strategies to address the significant challenges in academic medicine. The journal covers areas such as research, education, clinical care, community collaboration, and leadership, with a commitment to serving the public interest.