Pub Date: 2024-09-01 | Epub Date: 2024-06-17 | DOI: 10.1097/ACM.0000000000005790
Mark Ehioghae, Nana Danso, Pinky Jha
Title: Lessons Learned From a Mentorship Platform for Underrepresented Minority Medical Students.
Pub Date: 2024-09-01 | Epub Date: 2024-07-29 | DOI: 10.1097/ACM.0000000000005749
David A Cook, Christopher R Stephenson
Purpose: Learner engagement is the energy learners exert to remain focused and motivated to learn. The Learner Engagement Instrument (LEI) was developed to measure learner engagement in a short continuing professional development (CPD) activity. The authors validated LEI scores using validity evidence of internal structure and relationships with other variables.
Method: Participants attended 1 of 4 CPD courses (1 in-person, 2 online livestreamed, and 1 either in-person or livestreamed) in 2018, 2020, 2021, and 2022. Confirmatory factor analysis was used to examine model fit for several alternative structural models, separately for each course. The authors also conducted a generalizability study to estimate score reliability. Associations were evaluated between LEI scores and Continuing Medical Education Teaching Effectiveness (CMETE) scores and participant demographics. Statistical methods accounted for repeated measures by participants.
Results: Four hundred fifteen unique participants attended 203 different CPD presentations and completed the LEI 11,567 times. The originally hypothesized 4-domain model of learner engagement (domains: emotional, behavioral, cognitive in-class, cognitive out-of-class) demonstrated the best model fit in all 4 courses, with comparative fit index ≥ 0.99, standardized root mean square residual ≤ 0.031, and root mean square error of approximation ≤ 0.047. Reliability was acceptable for the overall score and all domain scores (50-rater G-coefficient ≥ 0.74) except the cognitive in-class domain (50-rater G-coefficient, 0.55 to 0.66). Findings were similar for in-person and online delivery modalities. LEI scores correlated moderately with teaching effectiveness (rho = 0.58) and weakly with participant age (rho = 0.19); other associations were small and not statistically significant. Using these findings, the authors generated a shortened 4-item instrument, the LEI Short Form.
Conclusions: This study confirms a 4-domain model of learner engagement and provides validity evidence that supports using LEI scores to measure learner engagement in both in-person and livestreamed CPD activities.
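The 50-rater G-coefficients reported above come from a generalizability study. As an illustration of how such a coefficient scales with the number of raters, here is a minimal Python sketch of the standard formula; the variance components are hypothetical, since the study's actual components are not given in the abstract:

```python
# Illustrative sketch, NOT the authors' analysis code.
# A G-coefficient extrapolated to n raters from variance components:
#   G = var_true / (var_true + var_error / n_raters)
# The variance component values used below are hypothetical.

def g_coefficient(var_true: float, var_error: float, n_raters: int) -> float:
    """Generalizability coefficient for a mean score over n_raters raters."""
    return var_true / (var_true + var_error / n_raters)

# Hypothetical components for a presentation-level score
var_true, var_error = 0.20, 3.0

print(round(g_coefficient(var_true, var_error, 50), 2))  # → 0.77 (many raters)
print(round(g_coefficient(var_true, var_error, 1), 2))   # → 0.06 (single rater)
```

The sketch shows why the abstract reports 50-rater coefficients: averaging over many raters shrinks the error term, so the same instrument can look unreliable for one rater yet acceptable for a course-sized audience.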
Title: Validation of the Learner Engagement Instrument for Continuing Professional Development.
Pub Date: 2024-09-01 | Epub Date: 2024-05-15 | DOI: 10.1097/ACM.0000000000005769
Christopher R Runyon, Miguel A Paniagua, Francine A Rosenthal, Andrea L Veneziano, Lauren McNaughton, Constance T Murray, Polina Harik
Problem: Many non-workplace-based assessments do not provide good evidence of a learner's problem representation or ability to provide a rationale for a clinical decision they have made. Exceptions include assessment formats that require resource-intensive administration and scoring. This article reports on research efforts toward building a scalable non-workplace-based assessment format that was specifically developed to capture evidence of a learner's ability to justify a clinical decision.
Approach: The authors developed a 2-step item format called SHARP (SHort Answer, Rationale Provision), named for the 2 tasks that make up each item. In collaboration with physician-educators, the authors began integrating short-answer questions into a patient medical record-based item in October 2021 and arrived at an innovative item format in December 2021. In this format, a test-taker interprets patient medical record data to make a clinical decision, types in their response, and pinpoints the medical record details that justify their answer. In January 2022, a total of 177 fourth-year medical students, representing 20 U.S. medical schools, completed 35 SHARP items in a proof-of-concept study.
Outcomes: Primary outcomes were item timing, difficulty, reliability, and scoring ease. There was substantial variability in item difficulty, with the average item answered correctly by 44% of students (range, 4%-76%). The estimated reliability (Cronbach α) of the set of SHARP items was 0.76 (95% confidence interval, 0.70-0.80). Item scoring is fully automated, minimizing resource requirements.
Next steps: A larger study is planned to gather additional validity evidence about the item format. This study will allow comparisons between performance on SHARP items and other examinations, examination of group differences in performance, and possible use cases for formative assessment. Cognitive interviews are also planned to better understand the thought processes of medical students as they work through the SHARP items.
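The difficulty and reliability statistics reported above are classical test-theory quantities that are easy to compute from a scored response matrix. The following is an illustrative Python sketch with a hypothetical 0/1 response matrix, not the study's analysis:

```python
import statistics

# Illustrative sketch, NOT the study's scoring code.
# Classical item difficulty (proportion correct) and Cronbach's alpha
# for a set of dichotomously scored items; the data are hypothetical.

def cronbach_alpha(items):
    """items: one list of scores per item, same respondents in each list."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]          # per-respondent totals
    sum_item_var = sum(statistics.pvariance(s) for s in items)
    return k / (k - 1) * (1 - sum_item_var / statistics.pvariance(totals))

# Hypothetical 0/1 responses: 3 items x 4 test-takers
items = [[1, 0, 1, 1],
         [1, 0, 1, 0],
         [1, 1, 1, 0]]

difficulty = [sum(s) / len(s) for s in items]  # proportion answering correctly
print(difficulty)             # → [0.75, 0.5, 0.75]
print(cronbach_alpha(items))  # → 0.5625
```

With only 3 toy items the alpha is low; the study's 35-item set reached 0.76, consistent with alpha rising as items are added.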
Title: SHARP (SHort Answer, Rationale Provision): A New Item Format to Assess Clinical Reasoning.
Pub Date: 2024-09-01 | Epub Date: 2024-06-12 | DOI: 10.1097/ACM.0000000000005787
Gabriella Schmuter, Robert A Beale
Title: Engaging Learners With the Utility of Electronic Medical Record Templates in Patient Note Writing.
Pub Date: 2024-09-01 | Epub Date: 2024-05-01 | DOI: 10.1097/ACM.0000000000005747
Capri P Alex, H Barrett Fromme, Larrie Greenberg, Michael S Ryan, Sarah Gustafson, Maya K Neeley, Shaughn Nunez, Molly E Rideout, Jessica VanNostrand, Nicola M Orlov
Purpose: Direct observation (DO) enables assessment of vital competencies, such as clinical skills. Despite a national requirement that medical students experience DOs during each clerkship, the frequency, length, quality, and context of these DOs are not well established. This study examines the quality, quantity, and characteristics of DOs obtained during pediatrics clerkships across multiple institutions.
Method: This multimethod study was performed at 6 U.S.-based institutions from March to October 2022. In the qualitative phase, focus groups and/or semistructured interviews were conducted with third-year medical students at the conclusion of pediatrics clerkships. In the quantitative phase, the authors administered an internally developed instrument after focus group discussions or interviews. Qualitative data were analyzed using thematic analysis, and quantitative data were analyzed using anonymous survey responses.
Results: Seventy-three medical students participated in 20 focus groups, and 71 (97.3%) completed the survey. The authors identified 7 themes that were organized into key principles: before, during, and after DO. Most students reported their DOs were conducted primarily by residents (62 [87.3%]) rather than attendings (6 [8.4%]) in inpatient settings. Participants reported daily attending observation of clinical reasoning (38 [53.5%]), communication (39 [54.9%]), and presentation skills (58 [81.7%]). One-third reported they were never observed taking a history by an inpatient attending (23 [32.4%]), and one-quarter reported they were never observed performing a physical exam (18 [25.4%]).
Conclusions: This study revealed that attendings observe students performing vital clinical skills in the inpatient setting less frequently than previously believed. When observers set expectations, create a safe learning environment, and follow up with actionable feedback, medical students perceive the experience as valuable; however, the DO experience is currently suboptimal. High-quality, competency-based clinical education for medical students is therefore necessary to drive future patient care through a competent physician workforce.
Title: Exploring Medical Student Experiences With Direct Observation During the Pediatric Clerkship.
Pub Date: 2024-09-01 | Epub Date: 2024-03-14 | DOI: 10.1097/ACM.0000000000005690
Paulina Perez Mejias, Gustavo Lara, Alex Duran, Rashelle Musci, Nancy A Hueppchen, Roy C Ziegelstein, Pamela A Lipsett
Purpose: To determine whether students' self-reported race/ethnicity and sex were associated with grades earned in 7 core clerkships. A person-centered approach was used to group students based on observed clerkship grade patterns. Predictors of group membership and predictive bias by race/ethnicity and sex were investigated.
Method: Using data from 6 medical student cohorts at Johns Hopkins University School of Medicine (JHUSOM), latent class analysis was used to classify students based on clerkship grades. Multinomial logistic regression was employed to investigate whether preclerkship measures and student demographic characteristics predicted clerkship performance-level groups. Marginal effects for United States Medical Licensing Exam (USMLE) Step 1 scores were obtained to assess the predictive validity of the test on group membership by race/ethnicity and sex. Predictive bias was examined by comparing multinomial logistic regression prediction errors across racial/ethnic groups.
Results: Three clerkship performance-level groups emerged from the data: low, middle, and high. Significant predictors of group membership were race/ethnicity, sex, and USMLE Step 1 scores. Black or African American students were more likely (odds ratio [OR] = 4.26) to be low performers than White students. Black or African American (OR = 0.08) and Asian students (OR = 0.41) were less likely to be high performers than White students. Female students (OR = 2.51) were more likely to be high performers than male students. Patterns of prediction errors observed across racial/ethnic groups showed predictive bias when using USMLE Step 1 scores to predict clerkship performance-level groups.
Conclusions: Disparities in clerkship grades associated with race/ethnicity were found among JHUSOM students, which persisted after controlling for USMLE Step 1 scores, sex, and other preclerkship performance measures. Differential predictive validity of USMLE Step 1 scores and systematic prediction errors by race/ethnicity indicate predictive bias when using USMLE Step 1 scores to predict clerkship performance across racial/ethnic groups.
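The odds ratios reported above come from a multinomial logistic regression. As a quick illustration of the quantity being reported, the following sketch computes an odds ratio from a hypothetical 2x2 table and shows how a fitted logit coefficient maps to an odds ratio; all numbers are hypothetical, not the study's data:

```python
import math

# Illustrative sketch, NOT the study's model.
# An odds ratio from a hypothetical 2x2 table, and the exp(beta)
# transformation that turns a logit coefficient into an odds ratio.

def odds_ratio(a: float, b: float, c: float, d: float) -> float:
    """OR for the 2x2 table [[a, b], [c, d]] = (a/b) / (c/d)."""
    return (a / b) / (c / d)

# Hypothetical counts: group 1 has 20 low performers and 80 others;
# group 2 has 5 low performers and 95 others.
print(round(odds_ratio(20, 80, 5, 95), 2))  # → 4.75

# In a (multinomial) logistic regression, a coefficient beta on a group
# indicator converts to an odds ratio via exp(beta).
print(round(math.exp(1.449), 2))  # ≈ 4.26, the scale of the largest reported OR
```

An OR above 1 means higher odds of the outcome for the first group; an OR below 1 (such as the 0.08 reported above) means lower odds.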
Title: Disparities in Medical School Clerkship Grades Associated With Sex, Race, and Ethnicity: A Person-Centered Approach.
Pub Date: 2024-09-01 | Epub Date: 2024-06-26 | DOI: 10.1097/ACM.0000000000005793
Gustavo A Patino, Laura Weiss Roberts
Title: The Need for Greater Transparency in Journal Submissions That Report Novel Machine Learning Models in Health Professions Education.
Pub Date: 2024-09-01 | Epub Date: 2024-06-12 | DOI: 10.1097/ACM.0000000000005785
Nam S Danny Hoang
Title: Protecting and Learning From LGBTQ Students.
Pub Date: 2024-09-01 | Epub Date: 2024-05-07 | DOI: 10.1097/ACM.0000000000005754
Lauren Clarke
Abstract: Trainees (medical students, residents, and fellows) are beginning to make strides in pushing for changes to their education. While there are many examples of successful trainee-led curriculum reform efforts, the path to success remains unclear. To better understand the process of trainee-driven curricular advocacy, the author analyzes this process through the lens of ecological systems theory (EST) not only to provide readers with context for the barriers and facilitators to trainee-driven curricular advocacy but also to further medical education's understanding of the sociopolitical forces influencing the process of trainee-driven curricular advocacy and reform through the lens of the trainee. EST explains how individuals are influenced by a complex web of social and environmental forces. The theory outlines 5 ecological systems of influence: the microsystem, mesosystem, exosystem, macrosystem, and chronosystem. Using EST to explore the process of trainee-driven curricular advocacy therefore clarifies the many layers of influence that trainees must navigate while advocating for curriculum change. The author then draws on this theory and their own experience as a medical student advocating for local and national curriculum reform to develop a model to facilitate trainee-driven curricular advocacy in medical education. The proposed model outlines concrete steps trainees can take while going through the process of curricular advocacy both within their own institutions and on a national level. Through developing this model, the author hopes not only to empower trainees to become agents of change in medical education but also to encourage faculty members and administrators within health professional training programs to support trainees in these efforts.
Title: Trainees as Agents of Change: A Theory-Informed Model for Trainee-Driven Curricular Advocacy in Medical Education.
Pub Date: 2024-09-01 | Epub Date: 2024-06-12 | DOI: 10.1097/ACM.0000000000005789
Julie Browne, Alison Bullock, Derek Gallen, John Jenkins
Title: It Is Time to Recognize Health Professions Educator Competencies.