Premedical or admissions competencies define a standard set of expectations for medical school candidates to guide their preparation for and application to medical school. These competencies also support medical school admissions committees in advancing fair and efficient application review. In their article updating the 2011 premedical competencies with a revised model consisting of 17 competencies that span professional, science, and thinking and reasoning domains, Wasserman and colleagues have attempted to characterize a qualified medical school applicant poised for success. While these competencies constitute a laudable effort, additional refinement may be needed over time to address emerging areas in medical education and health care. These include incorporating qualitative reasoning, fostering artificial intelligence literacy, emphasizing feedback literacy, and articulating the skills required to reconcile competing needs, ultimately reframing professionalism as a complex, adaptive challenge of developing professional identity within a demanding environment. To operationalize these competencies, a robust toolkit with resources, assessment tools, and training materials will be needed. To understand how well the updated competencies capture the enduring and evolving expectations for incoming medical students, medical educators must ask what problems these competencies are trying to solve. In this commentary, the authors propose 3 problems that admissions competencies could solve: (1) linking education to health outcomes, (2) promoting fairness in admissions, and (3) cultivating physicians skilled in relationship building and team functioning. For the competencies to address these problems, medical schools must continue to commit to holistic review processes that evaluate applicants within their unique contexts and opportunities. Focusing the admissions competency framework on behavior-based guidance, rather than on prescriptive experiences, creates a more accessible pathway to medical education that honors applicants' backgrounds, while identifying qualified candidates with the potential to become compassionate, adaptable physicians prepared for the ever-evolving health care landscape. Teaser text: This commentary reflects on updated premedical competencies that characterize a qualified medical school applicant. Forward-looking premedical student competencies should incorporate qualitative reasoning, foster artificial intelligence literacy, emphasize feedback literacy, and articulate the skills required to reconcile competing needs, ultimately reframing professionalism as a complex, adaptive challenge of developing professional identity within a demanding environment. Used well, premedical competencies can facilitate linking education to health outcomes, promote fairness in admissions, and cultivate physicians' skills in relationship building and team functioning.
{"title":"Medical school admissions competencies: what problems need to be solved?","authors":"Karen E Hauer, Erick Hung","doi":"10.1093/acamed/wvaf053","DOIUrl":"https://doi.org/10.1093/acamed/wvaf053","url":null,"abstract":"<p><p>Premedical or admissions competencies define a standard set of expectations for medical school candidates to guide their preparation for and application to medical school. These competencies also support medical school admissions committees in advancing fair and efficient application review. In their article updating the 2011 premedical competencies with a revised model consisting of 17 competencies that span professional, science, and thinking and reasoning domains, Wasserman and colleagues have attempted to characterize a qualified medical school applicant poised for success. While these competencies constitute a laudable effort, additional refinement may be needed over time to address emerging areas in medical education and health care. These include incorporating qualitative reasoning, fostering artificial intelligence literacy, emphasizing feedback literacy, and articulating the skills required to reconcile competing needs, ultimately reframing professionalism as a complex, adaptive challenge of developing professional identity within a demanding environment. To operationalize these competencies, a robust toolkit with resources, assessment tools, and training materials will be needed. To understand how well the updated competencies capture the enduring and evolving expectations for incoming medical students, medical educators must ask what problems these competencies are trying to solve. In this commentary, the authors propose 3 problems that admissions competencies could solve: (1) linking education to health outcomes, (2) promoting fairness in admissions, and (3) cultivating physicians skilled in relationship building and team functioning. For the competencies to address these problems, medical schools must continue to commit to holistic review processes that evaluate applicants within their unique contexts and opportunities. Focusing the admissions competency framework on behavior-based guidance, rather than on prescriptive experiences, creates a more accessible pathway to medical education that honors applicants' backgrounds, while identifying qualified candidates with the potential to become compassionate, adaptable physicians prepared for the ever-evolving health care landscape. Teaser text: This commentary reflects on updated premedical competencies that characterize a qualified medical school applicant. Forward looking premedical student competencies should incorporate qualitative reasoning, fostering artificial intelligence literacy, emphasizing feedback literacy, and articulating the skills required to reconcile competing needs, ultimately reframing professionalism as a complex, adaptive challenge of developing professional identity within a demanding environment. 
Used well, premedical competencies can facilitate linking education to health outcomes, promote fairness in admissions, and cultivate physicians' skills in relationship building and team functioning.</p>","PeriodicalId":50929,"journal":{"name":"Academic Medicine","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146127315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heather A Billings, Stacey Pylman, Larry Hurtubise, Judy Blebea, David P Way, Rachel Moquin, Anthony Gaynier, Deborah Simpson
The demand for high-quality faculty development in medical education is more acute than ever. Faculty developers' responsibilities are growing exponentially as they are called upon to help medical faculty maintain expertise in an evolving array of content domains, curricular and assessment strategies, education technologies, and active learning methods. Faculty development has a long history of supporting medical education colleagues in times of change. More recently, expectations for faculty developers are expanding due to (1) increasing enrollment of learners, (2) emphasis on learner-centered teaching practices and competency-based assessment methods, and (3) heightened accreditation standards for training faculty. Yet, faculty development in medical education lacks the conventional structures of a profession, such as an adopted set of competencies. In 2023, a series of sessions at regional, national, and international medical education conferences was held to generate ideas and collect examples of faculty developer competencies from over 100 stakeholders. More than 500 responses were gathered from prompts such as "What are the competencies faculty developers need?" and "What is one of the most valuable faculty developer competencies today?" This perspective offers both a rationale for establishing faculty development in medical education as a profession and a path forward through the provision and broad promotion of clearly defined core competencies. The competencies outlined are intended to help inform those entering or currently working in the profession, recruiters of faculty developers, medical education leaders, institutional partners, those conducting performance appraisals of faculty developers, and the broader medical education community. The aim of this work is to generate discussion among vested medical education and faculty development stakeholders who seek to provide structure, clarify roles, and further define medical education faculty development as a profession.
{"title":"Faculty developers in academic medicine: roles and competencies in times of change.","authors":"Heather A Billings, Stacey Pylman, Larry Hurtubise, Judy Blebea, David P Way, Rachel Moquin, Anthony Gaynier, Deborah Simpson","doi":"10.1093/acamed/wvaf054","DOIUrl":"https://doi.org/10.1093/acamed/wvaf054","url":null,"abstract":"<p><p>The demand for high-quality faculty development in medical education is more acute than ever. Faculty developers' responsibilities are growing exponentially as they are called upon to help medical faculty maintain expertise in an evolving array of content domains, curricular and assessment strategies, education technologies, and active learning methods. Faculty development has a long history of supporting medical education colleagues in times of change. More recently expectations for faculty developers are expanding due to (1) increasing enrollment of learners, (2) emphasis on learner-centered teaching practices and competency-based assessment methods, and (3) heightened accreditation standards for training faculty. Yet, faculty development in medical education lacks the conventional structures of a profession, such as an adopted set of competencies. In 2023, a series of sessions at regional, national, and international medical education conferences were held to generate ideas and collect examples of faculty developer competencies from over 100 stakeholders. More than 500 responses were gathered from prompts such as \"What are the competencies faculty developers need?\" and \"What is one of the most valuable faculty developer competencies today?\" This perspective offers both a rationale for establishing faculty development in medical education as a profession and a path forward through the provision and broad promotion of clearly defined core competencies. The competencies outlined are intended to help inform those entering or currently working in the profession, recruiters of faculty developers, medical education leaders, institutional partners, those conducting performance appraisals of faculty developers, and the broader medical education community. The aim of this work is to generate discussion among vested medical education and faculty development stakeholders who seek to provide structure, clarify roles, and further define medical education faculty development as a profession.</p>","PeriodicalId":50929,"journal":{"name":"Academic Medicine","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146167856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
John R Raymond, Laura C Michaelis, Adrienne B Mitchell, Michael DeBisschop, Elizabeth M Drew, Christopher Fadumiye, Natalie Fleury, Kelly Horton, Daniel Hughes, Thomas Mier, Abhijai Singh, Elizabeth Sweeny, Alicia Witten, Heather Carroll, M Tracy Zundel, Cheryl A Maurana
Academic health centers (AHCs) face unique challenges regarding freedom of expression, a topic that recently has caused considerable public controversy at universities across the United States. Little has been published on institutional responses to these controversies. To inspire conversation, reflection, and policy development at other AHCs, in this article, the authors outline steps taken by the Medical College of Wisconsin (MCW) in 2023 and 2024 to respond to an immediate challenge to freedom of expression and to develop a long-term response that will support the institution's commitment to freedom of expression as an essential element of education, patient care, and scientific inquiry. The authors describe the work of the MCW Committee on Freedom of Expression, including its formation, the steps it took to develop guiding principles, and the reaction of faculty, staff, and particularly medical students to initial educational programming around applying the newly developed principles. As a private institution, MCW has greater legal latitude than public AHCs, which allowed the committee to engage leaders and stakeholders in a reflective process, asking questions about the institution's position as a "public square," how to best address the needs of institutional and clinical partners, and the impact of power and inequity on broad protections of freedom of expression. To date, among stakeholders, students have been the most hesitant to embrace the new principles and consistently have shared concerns. Feedback from students has demonstrated that self-censorship is widespread, social justice concerns are a high priority, and structured programming can support and scaffold constructive conversations. The authors conclude that providing opportunities for engagement with institutional freedom of expression principles, especially when uncomfortable, is an essential step in integrating them into educational and clinical practices.
{"title":"One academic health center's response to a freedom of expression controversy.","authors":"John R Raymond, Laura C Michaelis, Adrienne B Mitchell, Michael DeBisschop, Elizabeth M Drew, Christopher Fadumiye, Natalie Fleury, Kelly Horton, Daniel Hughes, Thomas Mier, Abhijai Singh, Elizabeth Sweeny, Alicia Witten, Heather Carroll, M Tracy Zundel, Cheryl A Maurana","doi":"10.1093/acamed/wvaf007","DOIUrl":"https://doi.org/10.1093/acamed/wvaf007","url":null,"abstract":"<p><p>Academic health centers (AHCs) face unique challenges regarding freedom of expression, a topic that recently has caused considerable public controversy at universities across the United States. Little has been published on institutional responses to these controversies. To inspire conversation, reflection, and policy development at other AHCs, in this article, the authors outline steps taken by the Medical College of Wisconsin (MCW) in 2023 and 2024 to respond to an immediate challenge to freedom of expression and to develop a long-term response that will support the institution's commitment to freedom of expression as an essential element of education, patient care, and scientific inquiry. The authors describe the work of the MCW Committee on Freedom of Expression, including its formation, the steps it took to develop guiding principles, and the reaction of faculty, staff, and particularly medical students to initial educational programming around applying the newly developed principles. As a private institution, MCW has greater legal latitude than public AHCs, which allowed the committee to engage leaders and stakeholders in a reflective process, asking questions about the institution's position as a \"public square,\" how to best address the needs of institutional and clinical partners, and the impact of power and inequity on broad protections of freedom of expression. To date, among stakeholders, students have been the most hesitant to embrace the new principles and consistently have shared concerns. Feedback from students has demonstrated that self-censorship is widespread, social justice concerns are a high priority, and structured programming can support and scaffold constructive conversations. The authors conclude that providing opportunities for engagement with institutional freedom of expression principles, especially when uncomfortable, is an essential step in integrating them into educational and clinical practices.</p>","PeriodicalId":50929,"journal":{"name":"Academic Medicine","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146068276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What premedical students need to succeed: updated premed competencies for entering medical students.","authors":"Gautam Krishna Koipallil, Meghana Reddy, Elimarys Perez-Colon","doi":"10.1093/acamed/wvaf069","DOIUrl":"https://doi.org/10.1093/acamed/wvaf069","url":null,"abstract":"","PeriodicalId":50929,"journal":{"name":"Academic Medicine","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146183289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using the past to explain the present: understanding tiered grading in medical education.","authors":"James F Smith, Nicole M Piemonte","doi":"10.1093/acamed/wvaf015","DOIUrl":"https://doi.org/10.1093/acamed/wvaf015","url":null,"abstract":"","PeriodicalId":50929,"journal":{"name":"Academic Medicine","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145953775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Young-Min Kim, Young-Mee Lee, Do-Hwan Kim, Suyoun Kim, Ji-Hoon Kim, Hye Rim Jin, Chang-Jin Choi
Problem: Although the use of artificial intelligence (AI) as a diagnostic aid is increasing in clinical practice, medical education provides little training on how to incorporate AI-generated information into diagnosis and use it effectively in shared decision-making (SDM) with patients.
Approach: The authors developed and piloted a simulation-based course to train final-year medical students preparing for residency in AI-assisted SDM. Conducted between June and October 2023, the course combined online prelearning with onsite simulations using clinically approved AI tools (Lunit INSIGHT CXR, version 3.1.4.1 and MMG, version 1.1.4.3; Lunit Inc, Seoul, South Korea; used November 16 and 27, 2023). Scenarios portrayed asymptomatic patients with incidental findings (eg, pulmonary nodules, breast microcalcifications). Students engaged in two 12-minute simulated patient encounters featuring SDM with 2 management options. Sessions concluded with written feedback from the simulated patients and expert-facilitated debriefing. Twenty-seven students from 3 medical schools participated.
Outcomes: Program evaluation showed significant improvements in participants' comprehension and confidence in SDM (t = 6.51 and t = 7.56, P < .001, respectively) and AI-assisted SDM (t = 5.72 and t = 5.80, P < .001, respectively). Students found AI tools helpful for facilitating SDM and patient communication. Thematic analysis of interviews highlighted strengths, such as structured course design and reflective debriefing. Participants noted that prior education focused on diagnostic algorithms, whereas this course emphasized patient communication and preference-based decisions. They found AI tools useful for diagnosis and supporting discussion with patients through visual outputs. However, they identified limitations, including their own clinical knowledge gaps and the AI tools' lack of explainability. They suggested integrating SDM and AI-assisted diagnosis training into formal curricula to better prepare students for clinical practice.
Next steps: Future efforts should focus on integrating this course into undergraduate curricula or transition training programs to provide experiential learning opportunities in AI-assisted clinical practice.
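The outcomes above are reported as t statistics for gains in comprehension and confidence among the 27 participants. The abstract does not include the raw ratings or name the exact procedure, so the following is only a minimal sketch of the kind of paired pre/post comparison such figures imply; the ratings, sample size, and scipy-based analysis shown here are illustrative assumptions, not the study's data or code.

```python
# Minimal sketch of a paired pre/post comparison like the one summarized above.
# All ratings below are hypothetical placeholders; the study's raw data and exact
# analysis are not reported in the abstract.
from scipy import stats

# Hypothetical 5-point self-ratings of confidence in AI-assisted SDM,
# before and after the simulation course, for a small illustrative sample.
pre = [2, 3, 2, 3, 2, 3, 2, 4, 3, 2]
post = [4, 4, 3, 5, 4, 4, 3, 5, 4, 4]

t_stat, p_value = stats.ttest_rel(post, pre)  # paired t test on before/after ratings
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
```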
{"title":"Artificial intelligence-assisted shared decision-making training for medical students transitioning to residency.","authors":"Young-Min Kim, Young-Mee Lee, Do-Hwan Kim, Suyoun Kim, Ji-Hoon Kim, Hye Rim Jin, Chang-Jin Choi","doi":"10.1093/acamed/wvaf006","DOIUrl":"https://doi.org/10.1093/acamed/wvaf006","url":null,"abstract":"<p><strong>Problem: </strong>Although the use of artificial intelligence (AI) as a diagnostic aid is increasing in clinical practice, medical education provides little training on how to incorporate AI-generated information into diagnosis and use it effectively in shared decision-making (SDM) with patients.</p><p><strong>Approach: </strong>The authors developed and piloted a simulation-based course to train AI-assisted SDM to final-year medical students preparing for residency. Conducted between June and October 2023, the course combined online prelearning with onsite simulations using clinically approved AI tools (Lunit INSIGHT CXR, version 3.1.4.1 and MMG, version 1.1.4.3; Lunit Inc, Seoul, South Korea; used November 16 and 27, 2023). Scenarios portrayed asymptomatic patients with incidental findings (eg, pulmonary nodules, breast microcalcifications). Students engaged in two 12-minute simulated patient encounters featuring SDM with 2 management options. Sessions concluded with simulated patient-written feedback and expert-facilitated debriefing. Twenty-seven students from 3 medical schools participated.</p><p><strong>Outcomes: </strong>Program evaluation showed significant improvements in participants' comprehension and confidence in SDM (t = 6.51 and t = 7.56, P < .001, respectively) and AI-assisted SDM (t = 5.72 and t = 5.80, P < .001, respectively). Students found AI tools helpful for facilitating SDM and patient communication. Thematic analysis of interviews highlighted strengths, such as structured course design and reflective debriefing. Participants noted that prior education focused on diagnostic algorithms, whereas this course emphasized patient communication and preference-based decisions. They found AI tools useful for diagnosis and supporting discussion with patients through visual outputs. However, they identified limitations, including their own clinical knowledge gaps and lack of explainability in AI tool shortage. They suggested integrating SDM and AI-assisted diagnosis training into formal curricula to better prepare students for clinical practice.</p><p><strong>Next steps: </strong>Future efforts should focus on integrating this course into undergraduate curricula or transition training programs to provide experiential learning opportunities in AI-assisted clinical practice.</p>","PeriodicalId":50929,"journal":{"name":"Academic Medicine","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145985867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gregory M Ow, Geoffrey V Stetson, Joseph A Costello, Anthony R Artino, Lauren A Maggio
Problem: Medical education scholars struggle to join ongoing conversations in their field due to the lack of a dedicated medical education corpus. Without such a corpus, scholars must search too widely across thousands of irrelevant journals or too narrowly by relying on PubMed's Medical Subject Headings (MeSH). In tests conducted for this study, MeSH missed 34% of medical education articles.
Approach: From January to December 2024, the authors developed the Medical Education Corpus (MEC), the first dedicated collection of medical education articles, through a 3-step process. First, using the core-periphery model, they created the Medical Education Journals (MEJ), a collection of 2 groups of journals based on participation and influence in medical education discourse: the MEJ-Core (formerly the MEJ-24, 24 journals) and the MEJ-Adjacent (127 journals). Second, they developed and evaluated a machine learning model, the MEC Classifier, trained on 4,032 manually labeled articles to identify medical education content. Third, they applied the MEC Classifier to extract medical education articles from the MEJ-Core and MEJ-Adjacent journals.
Outcomes: As of December 2024, the MEC contained 119,137 medical education articles from the MEJ-Core (54,927 articles) and MEJ-Adjacent journals (64,210 articles). In an evaluation using 1,358 test articles, the MEC Classifier demonstrated significantly improved sensitivity compared with MeSH (90% vs 66%, P = .001), while maintaining a similar positive predictive value (82% vs 81%).
Next steps: The MEC provides a focused corpus that enables medical education scholars to more easily join conversations in the field. Scholars can rely on the MEC when reviewing literature to frame their work, and the MEC also creates opportunities for field-wide analyses and meta-research. The core methodology also underlies the MedEdMentor Paper Database (mededmentor.org), a separately maintained online tool that complements the versioned MEC snapshot with a web-based search interface.
Teaser text: Medical education scholars often struggle to effectively "join the conversation" because relevant literature is buried within biomedical databases like PubMed or general academic search engines like Google Scholar. This article introduces the Medical Education Corpus (MEC), a dedicated collection of 119,137 medical education articles curated using a specialized machine-learning classifier. In head-to-head testing, the MEC significantly outperformed PubMed's MeSH terms, capturing 90% of medical education articles compared with MeSH's 66%. By assembling these articles into a single, focused dataset, the MEC allows scholars to more easily find the literature they need to frame their work. The core methodology also underlies MedEdMentor, a separately maintained online tool that makes these optimized searches accessible to the wider medical education community.
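The evaluation above compares the MEC Classifier with MeSH on sensitivity and positive predictive value. As a point of reference for how such figures are defined, here is a minimal sketch of computing both metrics from confusion-matrix counts; the counts below are hypothetical illustrations only, not the study's 1,358-article test set or its statistical comparison.

```python
# Sensitivity and positive predictive value (PPV) from confusion-matrix counts.
# The counts are hypothetical and chosen only to illustrate the definitions used
# in the evaluation described above; they are not the study's data.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of true medical education articles that the classifier retrieves."""
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    """Proportion of retrieved articles that truly are medical education articles."""
    return tp / (tp + fp)

# Hypothetical counts for a classifier scored against manual labels.
tp, fp, fn = 450, 99, 50
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 450/500 -> 0.90
print(f"PPV         = {ppv(tp, fp):.2f}")          # 450/549 -> 0.82
```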
{"title":"Joining the conversation: introducing a dedicated medical education corpus.","authors":"Gregory M Ow, Geoffrey V Stetson, Joseph A Costello, Anthony R Artino, Lauren A Maggio","doi":"10.1093/acamed/wvaf008","DOIUrl":"https://doi.org/10.1093/acamed/wvaf008","url":null,"abstract":"<p><strong>Problem: </strong>Medical education scholars struggle to join ongoing conversations in their field due to the lack of a dedicated medical education corpus. Without such a corpus, scholars must search too widely across thousands of irrelevant journals or too narrowly by relying on PubMed's Medical Subject Headings (MeSH). In tests conducted for this study, MeSH missed 34% of medical education articles.</p><p><strong>Approach: </strong>From January to December 2024, the authors developed the Medical Education Corpus (MEC), the first dedicated collection of medical education articles, through a 3-step process. First, using the core-periphery model, they created the Medical Education Journals (MEJ), a collection of 2 groups of journals based on participation and influence in medical education discourse: the MEJ-Core (formerly the MEJ-24, 24 journals) and the MEJ-Adjacent (127 journals). Second, they developed and evaluated a machine learning model, the MEC Classifier, trained on 4,032 manually labeled articles to identify medical education content. Third, they applied the MEC Classifier to extract medical education articles from the MEJ-Core and MEJ-Adjacent journals.</p><p><strong>Outcomes: </strong>As of December 2024, the MEC contained 119,137 medical education articles from the MEJ-Core (54,927 articles) and MEJ-Adjacent journals (64,210 articles). In an evaluation using 1,358 test articles, the MEC Classifier demonstrated significantly improved sensitivity compared with MeSH (90% vs 66%, P = .001), while maintaining a similar positive predictive value (82% vs 81%).</p><p><strong>Next steps: </strong>The MEC provides a focused corpus that enables medical education scholars to more easily join conversations in the field. Scholars can rely on the MEC when reviewing literature to frame their work, and the MEC also creates opportunities for field-wide analyses and meta-research. The core methodology also underlies the MedEdMentor Paper Database (mededmentor.org), a separately maintained online tool that complements the versioned MEC snapshot with a web-based search interface.Teaser text: Medical education scholars often struggle to effectively \"join the conversation\" because relevant literature is buried within biomedical databases like PubMed or general academic search engines like Google Scholar. This article introduces the Medical Education Corpus (MEC), a dedicated collection of 119,137 medical education articles curated using a specialized machine-learning classifier. In head-to-head testing, the MEC significantly outperformed PubMed's MeSH terms, capturing 90% of medical education articles compared with MeSH's 66%. By assembling these articles into a single, focused dataset, the MEC allows scholars to more easily find the literature they need to frame their work. 
The core methodology also underlies MedEdMentor, a separately maintained online tool that makes these optimized searches accessible to the wider medical education community.","PeriodicalId":50929,"journal":{"name":"Academic Medicine","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145960721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Purpose: Shame is a deeply personal, complex, and underexplored emotion in medical training; however, how medical learners engage with shame (ie, how they process, recover from, and/or address shame) and the environmental factors affecting this process are currently unknown. This study used hermeneutic phenomenology to explore how medical learners (ie, resident physicians and medical students) engage with shame once it has occurred and the factors that influence this engagement.
Method: This study, which is part of a qualitative research program addressing shame in medical learners, used data collected from 12 residents (2016 and 2017) and 16 medical students (2018) from a residency program and a medical school in the United States. Data collection occurred via semistructured interviews during which participants reflected on shame experiences, including their engagement with it. The authors selected 14 transcripts (7 medical students, 7 residents) to achieve a range of shame experiences and impacts. Data were analyzed using Ajjawi and Higgs' 6 steps of hermeneutic analysis.
Results: Internal scaffolding (thought processes, self-evaluative tendencies, and position relative to others that informed participants' self-concept) was central to shame engagement. Learners' internal scaffoldings shaped and were shaped by distressing shame-integrating engagement (ie, hiding the self, deflecting shame, and transferring shame) and constructive shame-disintegrating engagement (ie, orienting toward others, exerting agency over self-evaluation, and reorienting to a core sense of self). Learning environments influenced shame engagement; environmental values that promoted shame-disintegrating engagement included learner centeredness, inclusivity, vulnerability, and respect.
Conclusions: Although struggle in medical training is inevitable, how learners respond to the shame that can follow is not. The divergent nature of shame engagement highlights the importance of learner agency and environmental response to shame. The authors provide specific suggestions for learners, faculty, and leaders to advance constructive shame engagement and the growth, connection, and belonging it can inspire.
{"title":"Seeking stabilization: how medical learners engage with shame during training.","authors":"Anna V Kulawiec, Luna Dolezal, William E Bynum","doi":"10.1093/acamed/wvaf029","DOIUrl":"https://doi.org/10.1093/acamed/wvaf029","url":null,"abstract":"<p><strong>Purpose: </strong>Shame is a deeply personal, complex, and underexplored emotion in medical training; however, how medical learners engage with shame (ie, how they process, recover from, and/or address shame) and the environmental factors affecting this process are currently unknown. This study used hermeneutic phenomenology to explore how medical learners (ie, resident physicians and medical students) engage with shame once it has occurred and the factors that influence this engagement.</p><p><strong>Method: </strong>This study, which is part of a qualitative research program addressing shame in medical learners, used data collected from 12 residents (2016 and 2017) and 16 medical students (2018) from a residency program and a medical school in the United States. Data collection occurred via semistructured interviews during which participants reflected on shame experiences, including their engagement with it. The authors selected 14 transcripts (7 medical students, 7 residents) to achieve a range of shame experiences and impacts. Data were analyzed using Ajjawi and Higgs' 6 steps of hermeneutic analysis.</p><p><strong>Results: </strong>Internal scaffolding (thought processes, self-evaluative tendencies, and position relative to others that informed participants' self-concept) was central to shame engagement. Learners' internal scaffoldings shaped and were shaped by distressing shame-integrating engagement (ie, hiding the self, deflecting shame, and transferring shame) and constructive shame-disintegrating engagement (ie, orienting toward others, exerting agency over self-evaluation, and reorienting to a core sense of self). Learning environments influenced shame engagement; environmental values that promoted shame-disintegrating engagement included learner centeredness, inclusivity, vulnerability, and respect.</p><p><strong>Conclusions: </strong>Although struggle in medical training is inevitable, how learners respond to the shame that can follow is not. The divergent nature of shame engagement highlights the importance of learner agency and environmental response to shame. The authors provide specific suggestions for learners, faculty, and leaders to advance constructive shame engagement and the growth, connection, and belonging it can inspire.</p>","PeriodicalId":50929,"journal":{"name":"Academic Medicine","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145991774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Daniel J Schumacher, Daniel J Sklansky, Brian Rissmiller, Lynn Thoreson, Linda A Waggoner-Fountain, Rajat Pareek, Sue E Poynter, Ariel S Winn, Catherine Michelson, Benjamin Kinnear, David A Turner, Leah S Millstein, Jennifer R Di Rocco, Kelsie Avants, Joanna Lewis, Pavan Srivastava, Erin L Giudice, Michelle Arandes, Sylvia Yeh, Alan Schwartz
Purpose: Entrustable professional activities (EPAs) detail essential activities within a given specialty. Although 17 general pediatrics EPAs have been defined, it is not known how many are needed to make high-reliability overall entrustment decisions about resident readiness for practice at the time of graduation and initial certification. This study sought to determine how many general pediatrics EPAs are needed.
Method: During the 2021 to 2022, 2022 to 2023, and 2023 to 2024 academic years, the authors collected entrustment-supervision levels, determined by clinical competency committees biannually, for the 17 general pediatrics EPAs for residents at 48 U.S. pediatric residency training programs. Midyear reports were collected between November and January of each year, and end-of-year reports were collected between May and July. The authors conducted generalizability and decision studies to determine the number of EPAs needed to make a reliable overall entrustment decision.
Results: A total of 166,077 individual entrustment-supervision levels were collected for 4,250 pediatric residents across the 17 general pediatrics EPAs. Across all data reporting cycles, the authors found that assessing 6 EPAs yields a generalizability coefficient of 0.8 and assessing 12 EPAs yields a generalizability coefficient of 0.9. However, results differed for midyear compared with end-of-year data collection timepoints as well as by postgraduate year. At graduation, 9 to 13 EPAs are needed to make a highly reliable (generalizability coefficient of 0.9) overall decision about degree of entrustment for unsupervised practice.
Conclusions: This study provides rich insight into the number of EPAs needed to make reliable entrustment decisions about resident readiness to provide patient care. Although readiness can be determined with as few as 9 general pediatrics EPAs (an assessment task), more may be needed to inform a comprehensive curriculum that ensures focus in all areas important to developing general pediatricians during residency training (a curricular task). Teaser text: This study sought to determine how many entrustable professional activities are necessary to make highly reliable overall entrustment decisions about pediatric resident readiness for unsupervised practice.
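The abstract does not report the underlying variance components, but the relationship it describes between the number of assessed EPAs and the resulting generalizability coefficient follows the standard single-facet decision-study projection, which is algebraically the Spearman-Brown prophecy formula. The sketch below takes the reported anchor that 6 EPAs yield a coefficient of about 0.8 and projects other EPA counts from it; the implied single-EPA coefficient is an inference from that anchor, not a reported quantity, and the study's actual multi-program, multi-year design is more complex than this illustration.

```python
# Decision-study projection of the generalizability coefficient as the number of
# assessed EPAs grows (single-facet case; algebraically the Spearman-Brown formula).
# The anchor g6 = 0.8 for 6 EPAs comes from the results above; the single-EPA
# coefficient derived from it is an inference, not a value reported in the study.

def project(g_ref: float, n_ref: int, n_new: int) -> float:
    """Project a generalizability coefficient observed with n_ref EPAs to n_new EPAs."""
    g1 = g_ref / (n_ref - (n_ref - 1) * g_ref)  # implied single-EPA coefficient
    return n_new * g1 / (1 + (n_new - 1) * g1)

g6 = 0.80  # reported: assessing 6 EPAs yields a coefficient of about 0.8
for n in (1, 6, 9, 12, 17):
    print(f"{n:2d} EPAs -> projected coefficient ~= {project(g6, 6, n):.2f}")
# With g6 = 0.8, 12 EPAs project to about 0.89, consistent with the reported 0.9.
```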
{"title":"Use of entrustable professional activities for reliable overall entrustment decisions.","authors":"Daniel J Schumacher, Daniel J Sklansky, Brian Rissmiller, Lynn Thoreson, Linda A Waggoner-Fountain, Rajat Pareek, Sue E Poynter, Ariel S Winn, Catherine Michelson, Benjamin Kinnear, David A Turner, Leah S Millstein, Jennifer R Di Rocco, Kelsie Avants, Joanna Lewis, Pavan Srivastava, Erin L Giudice, Michelle Arandes, Sylvia Yeh, Alan Schwartz, Daniel J Schumacher, Daniel J Sklansky, Brian Rissmiller, Lynn Thoreson, Linda A Waggoner-Fountain, Rajat Pareek, Sue E Poynter, Ariel S Winn, Catherine Michelson, Benjamin Kinnear, David A Turner, Leah S Millstein, Jennifer R Di Rocco, Kelsie Avants, Joanna Lewis, Pavan Srivastava, Erin L Giudice, Michelle Arandes, Sylvia Yeh, Alan Schwartz","doi":"10.1093/acamed/wvaf001","DOIUrl":"https://doi.org/10.1093/acamed/wvaf001","url":null,"abstract":"<p><strong>Purpose: </strong>Entrustable professional activities (EPAs) detail essential activities within a given specialty. Although 17 general pediatrics EPAs have been defined, it is not known how many are needed to make high-reliability overall entrustment decisions about resident readiness for practice at the time of graduation and initial certification. This study sought to determine how many general pediatrics EPAs are needed.</p><p><strong>Method: </strong>During the 2021 to 2022, 2022 to 2023, and 2023 to 2024 academic years, the authors collected entrustment-supervision levels, determined by clinical competency committees biannually, for the 17 general pediatrics EPAs for residents at 48 U.S. pediatric residency training programs. Midyear reports were collected between November and January of each year, and end-of-year reports were collected between May and July. The authors conducted generalizability and decision studies to determine the number of EPAs needed to make a reliable overall entrustment decision.</p><p><strong>Results: </strong>A total of 166,077 individual entrustment-supervision levels were collected for 4,250 pediatric residents across the 17 general pediatrics EPAs. Across all data reporting cycles, the authors found that assessing 6 EPAs yields a generalizability coefficient of 0.8 and assessing 12 EPAs yields a generalizability coefficient of 0.9. However, results differed for midyear compared with end-of-year data collection timepoints as well as by postgraduate year. At graduation, 9 to 13 EPAs are needed to make a highly reliable (generalizability coefficient of 0.9) overall decision about degree of entrustment for unsupervised practice.</p><p><strong>Conclusions: </strong>This study provides rich insight into the number of EPAs needed to make reliable entrustment decisions about resident readiness to provide patient care. 
Although readiness can be determined with as few as 9 general pediatrics EPAs (an assessment task), more may be needed to inform a comprehensive curriculum that ensures focus in all areas important to developing general pediatricians during residency training (a curricular task).Teaser text: This study sought to determine how many entrustable professional activities are necessary to make high reliability overall entrustment decisions about pediatric resident readiness for unsupervised practice.</p>","PeriodicalId":50929,"journal":{"name":"Academic Medicine","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145991816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Psychological safety and stress in the learning environment.","authors":"Alaina B Mui, Timothy D Bradley, Erika S Abel","doi":"10.1093/acamed/wvaf040","DOIUrl":"https://doi.org/10.1093/acamed/wvaf040","url":null,"abstract":"","PeriodicalId":50929,"journal":{"name":"Academic Medicine","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146120965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}