Background: Feedback is a powerful educational intervention in clinical education, yet its effectiveness depends on how it is integrated into teaching and learning activities. Previous studies have shown that productive feedback in clinical education relies on sociocultural factors such as a supportive feedback culture, trustworthy relationships, and student agency. Co-creation is a promising approach for designing educational interventions that are contextually relevant and aligned with the needs of teachers and students. This study aimed to advance both theoretical and practical understanding of co-creation as a design strategy in health professions education, particularly in developing productive feedback processes tailored to undergraduate clinical education.
Materials and methods: Eight co-creation sessions were conducted with faculty, clinical teachers, students, and researchers. The process was iterative and grounded in feedback design principles informed by the literature. Co-creation led to the development of a prototype Feedback Toolkit, which was piloted in two clinical teacher-student dyads during a seven-week physiotherapy clerkship. Weekly audio diaries were collected from participants and analyzed using content analysis.
Results: Data from the co-creation sessions informed the development of a Feedback Toolkit specifically designed for the clinical teacher-student dyad. The toolkit was built upon three design principles: (1) Contribute to a trustful relationship based on continuous mutual support, (2) Envision learning opportunities and feedback scaffolding, and (3) Plan the use of feedback. To operationalize these principles, the toolkit included practical materials such as podcasts, infographics, feedback prompts, and a Mini-CEX. The pilot study demonstrated the toolkit's usability and acceptability and highlighted its value in structuring feedback interactions. Challenges included limited time for full implementation and difficulties in providing constructive feedback.
Conclusion: The co-creation approach enabled the development of a fit-for-purpose feedback toolkit that aligns with the dynamic needs of clinical education. This study highlights co-creation as a feasible strategy for designing feedback processes in workplace-based learning.
"Co-creation of a fit-for-purpose Feedback Toolkit for clinical clerkships." Javiera Fuentes-Cimma, Dominique Sluijsmans, Francisca Rammsy, Ignacio Villagran, Lorena Isbej, Arnoldo Riquelme-Perez, Sylvia Heeneman. Medical Teacher, pp. 1-12. DOI: 10.1080/0142159X.2026.2634062. Pub Date: 2026-03-05.
Pub Date: 2026-03-01 | Epub Date: 2025-09-20 | DOI: 10.1080/0142159X.2025.2559921
Zohrehsadat Mirmoghtadaie
"From COVID to code: tracing recent AMEE themes - from global health crisis to the emergence of AI in medical education." Medical Teacher, p. 520.
Pub Date: 2026-03-01 | Epub Date: 2025-10-06 | DOI: 10.1080/0142159X.2025.2566967
Holly A Caretta-Weyer, Lalena M Yarris
Introduction: The advent of competency-based education has led to concerns regarding reductionism in the assessment of clinical competence. This apprehension stems from using the assessment of isolated subunit competencies to build a complete picture of clinical competence. Some argue that the entrustable professional activity (EPA) framework complements the construct of competencies, as EPAs describe units of work and require a global approach to their assessment. To that end, we aimed to discern whether the assessment of separate subunit competencies subsequently aggregated is equivalent to the global assessment of EPAs.
Methods: We designed a simulation-based workshop and assessed each student using the subunit competencies mapped to the core EPAs (the bottom-up approach) compared to the assessment of the global EPAs (the top-down approach) using 1) a supervision scale, 2) a global statement regarding entrustment and 3) a statement regarding readiness for residency. We aimed to determine whether the global assessment of EPAs was equivalent to aggregating the corresponding subunit competency assessments. The subunit competency assessments were additionally compared to aggregate workplace-based assessment data on the various subunit competencies from core clerkships.
Results: All eligible students participated (136/136). Assessment data obtained using the subunit competencies mapped to the EPAs were highly correlated with the assessment of subunit competencies obtained in the workplace during core clerkships. However, these subunit competency assessments obtained during the transition to residency (TTR) course did not correlate with EPA-based global supervision scale ratings, entrustment decisions, or perceived readiness for residency.
Discussion: Global assessment of EPAs and the judgment of entrustment appear to be separate processes from aggregating the assessment of subunit competencies. This may reflect variations in the approach to global assessment when compared to the assessment of subunit competencies and the need to consider the construct of trustworthiness in addition to the learner's ability to perform each activity.
"Discordance between global versus reductionist approach in competency-based assessment for medical students in a transition to residency course." Medical Teacher, pp. 506-515.
Pub Date: 2026-03-01 | Epub Date: 2025-09-04 | DOI: 10.1080/0142159X.2025.2556877
Andrew Coggins, Tina Wu, Ishan Tellambura, Sandra Warburton
"Response to: 'Virtual patients, real conversations: ChatGPT advanced voice mode for pain communication training'." Medical Teacher, p. 519.
Pub Date: 2026-03-01 | Epub Date: 2025-09-24 | DOI: 10.1080/0142159X.2025.2561782
Kehoe A, Ellawala A, Karunaratne D, Tiffin P A, Crampton P E S
Introduction: Effective teamwork is essential for the successful functioning of healthcare. Breakdowns in teamwork are frequently flagged as contributing to major patient safety issues. Current research indicates a lack of knowledge regarding the key factors that affect teamwork and how medical educators can best prepare students. This study explores how doctors work within healthcare teams, examining the barriers and enablers to effective teamworking.
Methods: A realist evaluation was used to understand the contextual influences and subsequent mechanisms that impact teamwork outcomes. Phase 1 included a realist literature review and scoping interviews with key stakeholders (n = 9). Phase 2 included 63 realist interviews representing a wide range of professional groups, roles, and demographics across UK healthcare.
Results: The initial programme theory developed in Phase 1 was refined during Phase 2, integrating and extending the dispersed and patchy current evidence on the contexts, mechanisms, and outcomes of teamwork. Enablers included building a positive and supportive culture, effective communication, leaders who are understanding and approachable, clearly defined roles and respect, and continuity and experience among those in newer roles. Barriers included high service demands and work pressures, power imbalances and negative hierarchy, a lack of support for those new to teams and organisations, poor communication, poor leadership, a lack of appreciation and understanding of the needs of differing groups within teams, and, finally, equality, diversity, and inclusion (EDI) issues.
Discussion: We have identified that team dynamics are likely to be hindered by transient teams, lack of support, dysfunctional leadership and communication, and unapproachable colleagues. There are currently clear difficulties in how doctors interact with those in newer roles, and in the ways team members are integrated into teams. This is the first research to develop a teamworking programme theory that can be used to support educators, institutions and regulators.
"Effective teamwork within healthcare - Let's finally make it happen! A realist evaluation." Medical Teacher, pp. 476-492.
Pub Date: 2026-03-01 | Epub Date: 2025-09-20 | DOI: 10.1080/0142159X.2025.2560578
Yunzhu Ouyang, Qi Guo, Cecilia B Alves, Andrea J Gotzmann, Marguerite Roy, Judy L McCormick
Purpose: In the post-COVID era, recognizing evolving physician competencies is crucial for guiding medical education and test development. This study aimed to extract valuable insights concerning emerging physician competencies from influencers' posts on X, leveraging an AI-driven approach.
Method: Two datasets pertaining to medical competency were analyzed, with posts collected from January 1, 2020, to June 1, 2023. Social network analyses were performed to identify influencers leading medical competency conversations on X. ChatGPT was utilized for textual analyses of influencers' posts to reveal core themes of physician competencies.
Results: Social network analysis revealed that medical professionals played a predominant role in disseminating information on medical competency on X. Textual analysis identified six core themes in the CanMEDS dataset (clinical learning environment, anti-racism, EDI, adaptive expertise, planetary health, and leadership development) and seven in the MedEd dataset (cultural competency, structural competency, assessment models, virtual care, EDI, leadership development, and wellness).
Conclusion: The identified themes emphasize physicians' competencies in addressing health disparities, preparing for real-world challenges, adapting to the evolving healthcare landscape, and leading effectively in diverse healthcare settings. The findings hold significant implications for medical education, test development, and the integration of artificial intelligence in physician competency assessment.
"Exploring emerging physician competencies: Analyzing insights from medical care influencers on X." Medical Teacher, pp. 444-453.
Introduction: Item difficulty prediction is crucial for planning and administering educational assessments, especially high-stakes assessments such as medical licensing examinations. The inconsistent findings across existing studies, however, highlight a critical gap in understanding which modeling components are most influential. This research addresses this gap by systematically investigating several key factors hypothesized to affect prediction performance.
Methods: This study explored the impact of (1) model domain specificity, (2) input content granularity (e.g. item stem, correct answer, and distractors), (3) embedding dimensionality, and (4) the choice of machine learning regressor. A range of embedding models and machine learning regressors were selected to predict the difficulty of 2,815 multiple-choice questions sourced from the National Center for Health Professions Education Development.
Results: Analyses revealed that XGBoost outperformed the other regressors (mean RMSE = 0.1779), and the use of the domain-specific MedEmbed-small embedding model consistently improved prediction accuracy (mean RMSE = 0.1860). Notably, using the item stem and the correct answer as input features achieved the best trade-off between predictive accuracy and model parsimony (RMSE = 0.1756).
Discussion: These findings offer valuable insights for data-driven measurement practices in medical education, including automated item calibration, computerized adaptive testing, and intelligent tutoring systems. Furthermore, this study revealed that the optimal feature set for difficulty prediction is contingent on the item style. Future research should extend this line of inquiry to the difficulty prediction of multimodal test items.
"Medical exam question difficulty prediction: An analysis of embedding representations, machine-learning approaches, and input feature impact." Shicong Feng, Tianpeng Zheng, Hao Hang, Jiayi Liu, Zhehan Jiang. Medical Teacher, pp. 454-466. DOI: 10.1080/0142159X.2025.2586619. Pub Date: 2026-03-01.
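The pipeline this abstract describes (embed the item text, then fit a regressor to predict difficulty) can be sketched in miniature. Everything below is a stand-in, not the study's implementation: a trigram-hash vector replaces a real embedding model such as MedEmbed-small, a k-nearest-neighbour mean replaces the XGBoost regressor, and the items and difficulty values are invented. Only the overall shape (features built from stem plus correct answer, RMSE as the metric) follows the abstract.

```python
import math
from zlib import crc32

def embed(text: str, dim: int = 16) -> list[float]:
    """Toy embedding: hash character trigrams into a unit-length vector.
    Stand-in for a real embedding model (an assumption, not the study's)."""
    v = [0.0] * dim
    for i in range(len(text) - 2):
        v[crc32(text[i:i + 3].encode()) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def knn_predict(train: list[tuple[list[float], float]],
                query: list[float], k: int = 3) -> float:
    """Predict difficulty as the mean difficulty of the k nearest
    training items (a minimal stand-in for the XGBoost regressor)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return sum(d for _, d in nearest) / len(nearest)

def rmse(pairs: list[tuple[float, float]]) -> float:
    """Root-mean-square error between predicted and observed difficulty."""
    return math.sqrt(sum((p - o) ** 2 for p, o in pairs) / len(pairs))

# Hypothetical items: (stem + correct answer) -> observed difficulty in [0, 1].
items = [
    ("Which nerve innervates the diaphragm? Phrenic nerve", 0.30),
    ("First-line treatment for anaphylaxis? Intramuscular adrenaline", 0.25),
    ("Enzyme deficient in phenylketonuria? Phenylalanine hydroxylase", 0.55),
    ("Most common cause of community-acquired pneumonia? S. pneumoniae", 0.40),
]
train, held_out = items[:3], items[3:]
model = [(embed(text), diff) for text, diff in train]
preds = [(knn_predict(model, embed(text), k=2), diff) for text, diff in held_out]
print(f"held-out RMSE = {rmse(preds):.3f}")
```

In practice one would swap in the real embedding model and an XGBoost regressor; the scaffolding around them (feature construction, train/held-out split, RMSE) stays the same.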
Pub Date: 2026-03-01 | Epub Date: 2025-09-18 | DOI: 10.1080/0142159X.2025.2553627
Minna Ylönen, Verneri Hannula, Teuvo Antikainen, Kristina Mikkonen, Jonna Juntunen, Panu Forsman, Pauliina Aukee, Sami Lehesvuori, Anneli Kuusinen-Laukkala, Raija Hämäläinen, Petri Kulmala
Purpose: A successful mentoring process and relationship require active engagement from both mentor and mentee. This study explored and evaluated the experiences, perceptions and associated factors of mentoring within postgraduate medical education from both mentors' and mentees' perspectives.
Materials and methods: The Mentors' Competence Instrument (MCI) was used to collect data in the three Wellbeing Service Counties in Finland. The cross-sectional survey yielded a total of 154 mentor and 79 mentee responses. Statistical analyses were conducted on the quantitative data, while the qualitative data were analysed using inductive content analysis.
Results: Statistically significant differences between the two groups were observed in Reflection during mentoring, Constructive feedback, and Learner-centred evaluation. The youngest mentees (under 31 years old) received the highest overall evaluations across all MCI sum variables. Areas for improvement were identified by the mentees in the structures and resourcing of mentoring, the quality of the mentoring relationship, the mentoring process, and the pedagogical competence of the mentors.
Conclusion: Mentees tended to evaluate the mentoring they received less positively than mentors assessed their own mentoring competence. Younger mentees appeared to rate their mentoring experience more favorably than older mentees. Mentees highlighted various aspects of mentoring that could benefit from further development.
"Mentors' and mentees' perspectives on mentoring competence and areas for improvement in postgraduate medical education - A cross-sectional study." Medical Teacher, pp. 405-414.
Pub Date: 2026-03-01 | Epub Date: 2025-09-29 | DOI: 10.1080/0142159X.2025.2564869
Emily Rush, Jessica N Byram, Colleen N Garnett, Nicole DeVaul, Laura Smith, Margaret Checchi, Daniel Martin, Leslie A Hoffman, Kirstin M Brown, Daniel J Mumbower, Robert M Becker, Victoria A Roach, Alison F Doubleday, Danielle N Edwards, Rebecca S Lufler, Alexandra Wactor, Sophia Boxerman, Suzanne Smith, Hannah Herriott, Adam B Wilson
Purpose: Medical schools would benefit from systematic guidance for developing comprehensive artificial intelligence (AI) policies, given generative AI's rapid integration into medical education. This study developed and applied an idealized AI policy framework to analyze AI-related documents at U.S. medical school institutions, providing reference points for the development and refinement of institutional policies.
Methods: AI-related documents from institutions with U.S. allopathic and osteopathic medical schools were systematically collected (from August to October 2024) and analyzed using a comprehensive framework containing 24 subthemes across six themes: Background/Context, Governance, AI Literacy, Tools/Usage, Ethical/Legal Considerations, and Technology Support and Infrastructure. Publicly available online documents were systematically coded to generate framework subtheme scores indicating breadth of coverage across framework themes.
Results: AI-related documents retrieved from 73.7% (146/198) of U.S. medical school institutions covered an average of 8 of 24 subthemes, representing a mean framework coverage score of 32.3% ± 19.8. Rarely addressed subthemes included Audit and Compliance Mechanisms (6.8%, 10/146), Technical Infrastructure (6.2%, 9/146), and Environmental Stewardship (1.4%, 2/146). Academic Honesty and Plagiarism dominated AI-related documents (81.5%, 119/146), followed by Decision-Making Authority (54.1%, 79/146) and Critical Evaluation (52.1%, 76/146). Formal AI policies demonstrated significantly higher framework coverage than other AI document types (44.0% vs 30.4%, p = 0.003). The seven institutions with the highest coverage (≥13/24 subthemes) shared seven common distinguishing features, six of which were present universally.
Conclusions: AI-related documents currently emphasize academic integrity over strategic planning, with substantial gaps in infrastructure and review mechanisms. Institutions can enhance their AI policies by incorporating common features identified in well-designed policies and following frameworks that strike a balance between immediate concerns and long-term adaptability.
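The framework coverage score reported above is simply the breadth of subtheme coverage: the share of the 24 framework subthemes that an institution's documents address. A minimal sketch of that calculation (with placeholder subtheme labels, not the study's actual coding instrument) might look like:

```python
# Hypothetical illustration of the coverage scoring described in the abstract:
# documents are coded for presence/absence of 24 framework subthemes, and the
# coverage score is the percentage of subthemes addressed.

FRAMEWORK_SUBTHEMES = 24  # 24 subthemes across six themes, per the study


def coverage_score(subthemes_covered: set) -> float:
    """Return the percentage of the 24 framework subthemes covered."""
    return 100.0 * len(subthemes_covered) / FRAMEWORK_SUBTHEMES


# Example: an institution whose documents address 8 of the 24 subthemes,
# matching the reported average of 8 subthemes (~32-33% coverage).
covered = {f"subtheme_{i}" for i in range(8)}  # placeholder labels
print(round(coverage_score(covered), 1))
```

Under this reading, the mean of 8 subthemes corresponds to roughly 33% coverage, consistent with the reported mean score of 32.3%.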
An audit of AI-related documents across U.S. medical schools: A framework-based qualitative content analysis.
Medical Teacher, pages 493-505.