Todd A Guth, Rachel M Wolfe, Ofelia Martinez, Raja G Subhiyah, Jerusha J Henderek, Caroline McAllister, Danielle Roussel
Title: Assessment of Clinical Reasoning in Undergraduate Medical Education: A Pragmatic Approach to Programmatic Assessment
Journal: Academic Medicine
DOI: 10.1097/ACM.0000000000005665
Published: 2024-08-01 (Epub 2024-02-27)
Citations: 0
Abstract
Purpose: Clinical reasoning, a complex construct integral to the practice of medicine, has been challenging to define, teach, and assess. Programmatic assessment purports to overcome validity limitations of judgments made from individual assessments through proportionality and triangulation processes. This study explored a pragmatic approach to the programmatic assessment of clinical reasoning.
Method: The study analyzed data from 2 student cohorts from the University of Utah School of Medicine (UUSOM) (n = 113 in cohort 1 and 119 in cohort 2) and 1 cohort from the University of Colorado School of Medicine (CUSOM) using assessment data that spanned from 2017 to 2021 (n = 199). The study methods included the following: (1) asking faculty judges to categorize student clinical reasoning skills, (2) selecting institution-specific assessment data conceptually aligned with clinical reasoning, (3) calculating correlations between assessment data and faculty judgments, and (4) developing regression models between assessment data and faculty judgments.
Results: Faculty judgments of student clinical reasoning skills were converted to a continuous variable of clinical reasoning struggles, with mean (SD) ratings of 2.93 (0.27) for the 232 UUSOM students and 2.96 (0.17) for the 199 CUSOM students. A total of 67 and 32 discrete assessment variables were included from the UUSOM and CUSOM, respectively. Pearson r correlations were moderate to strong between many individual and composite assessment variables and faculty judgments. Regression models demonstrated an overall adjusted R2 (standard error of the estimate) of 0.50 (0.19) for UUSOM cohort 1, 0.28 (0.15) for UUSOM cohort 2, and 0.30 (0.14) for CUSOM.
Conclusions: This study represents an early pragmatic exploration of regression analysis as a potential tool for operationalizing the proportionality and triangulation principles of programmatic assessment. The study found that programmatic assessment may be a useful framework for longitudinal assessment of complicated constructs, such as clinical reasoning.
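The analytic steps in the Method — correlating each assessment variable with faculty judgments, then fitting a regression model and reporting adjusted R² — can be sketched as follows. This is a minimal illustration on simulated data; the variable names, sample sizes, and weights are hypothetical and not drawn from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_vars = 120, 5

# Simulated assessment scores and a faculty judgment that partially
# depends on them (plus noise), standing in for the real data.
X = rng.normal(size=(n_students, n_vars))
true_weights = np.array([0.6, 0.3, 0.0, 0.2, 0.0])
judgment = X @ true_weights + rng.normal(scale=0.5, size=n_students)

# Step 3: Pearson r between each assessment variable and the judgment.
pearson_r = np.array(
    [np.corrcoef(X[:, j], judgment)[0, 1] for j in range(n_vars)]
)

# Step 4: ordinary least squares regression of the judgment on all
# assessment variables (intercept added manually).
X1 = np.column_stack([np.ones(n_students), X])
beta, *_ = np.linalg.lstsq(X1, judgment, rcond=None)
resid = judgment - X1 @ beta

# Adjusted R^2 penalizes for the number of predictors, mirroring the
# adjusted R^2 values reported in the Results.
ss_res = resid @ resid
ss_tot = ((judgment - judgment.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n_students - 1) / (n_students - n_vars - 1)

print(f"strongest correlate: variable {pearson_r.argmax()}")
print(f"adjusted R^2 = {adj_r2:.2f}")
```

In practice the study used institution-specific assessment variables (67 at UUSOM, 32 at CUSOM) rather than simulated scores, and adjusted R² is the natural summary here because it discounts the fit inflation that comes from adding many predictors.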
About the Journal
Academic Medicine, the official peer-reviewed journal of the Association of American Medical Colleges, acts as an international forum for exchanging ideas, information, and strategies to address the significant challenges in academic medicine. The journal covers areas such as research, education, clinical care, community collaboration, and leadership, with a commitment to serving the public interest.