Pub Date: 2024-01-22 | eCollection Date: 2024-01-01 | DOI: 10.5334/pme.1110
Holly A Caretta-Weyer, Alina Smirnova, Michael A Barone, Jason R Frank, Tina Hernandez-Boussard, Dana Levinson, Kiki M J M H Lombarts, Kimberly D Lomis, Abigail Martini, Daniel J Schumacher, David A Turner, Abigail Schuh
Assessment in medical education has evolved through a sequence of eras each centering on distinct views and values. These eras include measurement (e.g., knowledge exams, objective structured clinical examinations), then judgments (e.g., workplace-based assessments, entrustable professional activities), and most recently systems or programmatic assessment, where over time multiple types and sources of data are collected and combined by competency committees to ensure individual learners are ready to progress to the next stage in their training. Significantly less attention has been paid to the social context of assessment, which has led to an overall erosion of trust in assessment by a variety of stakeholders including learners and frontline assessors. To meaningfully move forward, the authors assert that the reestablishment of trust should be foundational to the next era of assessment. In our actions and interventions, it is imperative that medical education leaders address and build trust in assessment at a systems level. To that end, the authors first review tenets on the social contextualization of assessment and its linkage to trust and discuss consequences should the current state of low trust continue. The authors then posit that trusting and trustworthy relationships can exist at individual as well as organizational and systems levels. Finally, the authors propose a framework to build trust at multiple levels in a future assessment system; one that invites and supports professional and human growth and has the potential to position assessment as a fundamental component of renegotiating the social contract between medical education and the health of the public.
{"title":"The Next Era of Assessment: Building a Trustworthy Assessment System.","authors":"Holly A Caretta-Weyer, Alina Smirnova, Michael A Barone, Jason R Frank, Tina Hernandez-Boussard, Dana Levinson, Kiki M J M H Lombarts, Kimberly D Lomis, Abigail Martini, Daniel J Schumacher, David A Turner, Abigail Schuh","doi":"10.5334/pme.1110","DOIUrl":"10.5334/pme.1110","url":null,"abstract":"<p><p>Assessment in medical education has evolved through a sequence of eras each centering on distinct views and values. These eras include measurement (e.g., knowledge exams, objective structured clinical examinations), then judgments (e.g., workplace-based assessments, entrustable professional activities), and most recently systems or programmatic assessment, where over time multiple types and sources of data are collected and combined by competency committees to ensure individual learners are ready to progress to the next stage in their training. Significantly less attention has been paid to the social context of assessment, which has led to an overall erosion of trust in assessment by a variety of stakeholders including learners and frontline assessors. To meaningfully move forward, the authors assert that the reestablishment of trust should be foundational to the next era of assessment. In our actions and interventions, it is imperative that medical education leaders address and build trust in assessment at a systems level. To that end, the authors first review tenets on the social contextualization of assessment and its linkage to trust and discuss consequences should the current state of low trust continue. The authors then posit that trusting and trustworthy relationships can exist at individual as well as organizational and systems levels. Finally, the authors propose a framework to build trust at multiple levels in a future assessment system; one that invites and supports professional and human growth and has the potential to position assessment as a fundamental component of renegotiating the social contract between medical education and the health of the public.</p>","PeriodicalId":48532,"journal":{"name":"Perspectives on Medical Education","volume":"13 1","pages":"12-23"},"PeriodicalIF":4.8,"publicationDate":"2024-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10809864/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139565140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-05 | eCollection Date: 2024-01-01 | DOI: 10.5334/pme.1029
Jenny McDonald, Wendy Hu, Sylvia Heeneman
Introduction: Portfolios scaffold reflection on experience so students can plan their learning. To elicit reflection, the learning experiences documented in portfolios must be meaningful. To understand what experiences first- and second-year medical students find meaningful, we studied the patterns in the artefacts chosen for portfolios and their associated written reflections.
Methods: This explanatory mixed methods study of a longitudinal dataset of 835 artefacts from 37 medical students' portfolios identified patterns in artefact types over time. Mixed model logistic regression analysis identified time, student and curriculum factors associated with inclusion of the most common types of artefacts. Thematic analysis of participants' reflections about their artefacts provided insight into their choices. Interpretation of the integrated findings was informed by Transformative Learning (TL) theory.
Results: Artefact choices changed over time, influenced by curriculum changes and personal factors. In first year, the most common types of artefacts were Problem Based Learning mechanism diagrams and group photos representing classwork; in second year, they were written assignments and 'selfies' representing social and clinical activities. Themes in the written reflections were Landmarks and Progress, Struggles and Strategies, Connection and Collaboration, and Joyful Memories for Balance. Coursework artefacts and photographic self-portraits represented all levels of transformative learning from across the curriculum.
Conclusions: Medical students chose artefacts to represent challenging and/or landmark experiences, balanced by experiences that were joyful or fostered peer connection. Novelty influenced choice. To maximise learning, students should draw from all experiences to promote supported reflection with an advisor. Tasks should be timed to coincide with the introduction of new challenges.
{"title":"Struggles and Joys: A Mixed Methods Study of the Artefacts and Reflections in Medical Student Portfolios.","authors":"Jenny McDonald, Wendy Hu, Sylvia Heeneman","doi":"10.5334/pme.1029","DOIUrl":"10.5334/pme.1029","url":null,"abstract":"<p><strong>Introduction: </strong>Portfolios scaffold reflection on experience so students can plan their learning. To elicit reflection, the learning experiences documented in portfolios must be meaningful. To understand what experiences first- and second-year medical students find meaningful, we studied the patterns in the artefacts chosen for portfolios and their associated written reflections.</p><p><strong>Methods: </strong>This explanatory mixed methods study of a longitudinal dataset of 835 artefacts from 37 medical student' portfolios, identified patterns in artefact types over time. Mixed model logistic regression analysis identified time, student and curriculum factors associated with inclusion of the most common types of artefacts. Thematic analysis of participants' reflections about their artefacts provided insight into their choices. Interpretation of the integrated findings was informed by Transformative Learning (TL) theory.</p><p><strong>Results: </strong>Artefact choices changed over time, influenced by curriculum changes and personal factors. In first year, the most common types of artefacts were Problem Based Learning mechanism diagrams and group photos representing classwork; in second year written assignments and 'selfies' representing social and clinical activities. Themes in the written reflections were Landmarks and Progress, Struggles and Strategies, Connection and Collaboration, and Joyful Memories for Balance. Coursework artefacts and photographic self-portraits represented all levels of transformative learning from across the curriculum.</p><p><strong>Conclusions: </strong>Medical students chose artefacts to represent challenging and/or landmark experiences, balanced by experiences that were joyful or fostered peer connection. Novelty influenced choice. To maximise learning students should draw from all experiences, to promote supported reflection with an advisor. Tasks should be timed to coincide with the introduction of new challenges.</p>","PeriodicalId":48532,"journal":{"name":"Perspectives on Medical Education","volume":"3 1","pages":"1-11"},"PeriodicalIF":3.6,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10768569/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139378596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-29 | eCollection Date: 2023-01-01 | DOI: 10.5334/pme.1035
Zoe Abraham, Carolyn Melro, Sarah Burm
Introduction: During the COVID-19 pandemic, medical schools were forced to suspend in-person interviews and transition to a virtual Multiple Mini Interview (vMMI) format. MMIs typically comprise multiple short assessments overseen by assessors, with the aim of measuring a wide range of non-cognitive competencies. The adaptation to vMMI required medical schools to make swift changes to their MMI structure and delivery. In this paper, we focus on two specific groups greatly impacted by the decision to transition to vMMIs: medical school applicants and MMI assessors.
Methods: We conducted an interpretive qualitative study to explore medical school applicants' and assessors' experiences transitioning to an asynchronous vMMI format. Ten assessors and five medical students from one Canadian medical school participated in semi-structured interviews. Data was analyzed using a thematic analysis framework.
Results: Both applicants and assessors shared a mutual feeling of longing and nostalgia for an interview experience that, due to the pandemic, was understandably adapted. The most obvious forms of loss experienced - albeit in different ways - were: 1) human connection and 2) missed opportunity. Applicants and assessors described several factors that amplified their grief/loss response. These were: 1) resource availability, 2) technological concerns, and 3) the virtual interview environment.
Discussion: While virtual interviewing has obvious advantages, we cannot overlook that asynchronous vMMIs do not lend themselves to the same caliber of interaction and camaraderie as experienced in in-person interviews. We outline several recommendations medical schools can implement to enhance the vMMI experience for applicants and assessors.
{"title":"'Click, I Guess I'm Done': Applicants' and Assessors' Experiences Transitioning to a Virtual Multiple Mini Interview Format.","authors":"Zoe Abraham, Carolyn Melro, Sarah Burm","doi":"10.5334/pme.1035","DOIUrl":"10.5334/pme.1035","url":null,"abstract":"<p><strong>Introduction: </strong>During the COVID-19 pandemic, medical schools were forced to suspend in-person interviews and transition to a virtual Multiple Mini Interview (vMMI) format. MMIs typically comprise multiple short assessments overseen by assessors, with the aim of measuring a wide range of non-cognitive competencies. The adaptation to vMMI required medical schools to make swift changes to their MMI structure and delivery. In this paper, we focus on two specific groups greatly impacted by the decision to transition to vMMIs: medical school applicants and MMI assessors.</p><p><strong>Methods: </strong>We conducted an interpretive qualitative study to explore medical school applicants' and assessors' experiences transitioning to an asynchronous vMMI format. Ten assessors and five medical students from one Canadian medical school participated in semi-structured interviews. Data was analyzed using a thematic analysis framework.</p><p><strong>Results: </strong>Both applicants and assessors shared a mutual feeling of longing and nostalgia for an interview experience that, due to the pandemic, was understandably adapted. The most obvious forms of loss experienced - albeit in different ways - were: 1) human connection and 2) missed opportunity. Applicants and assessors described several factors that amplified their grief/loss response. These were: 1) resource availability, 2) technological concerns, and 3) the virtual interview environment.</p><p><strong>Discussion: </strong>While virtual interviewing has obvious advantages, we cannot overlook that asynchronous vMMIs do not lend themselves to the same caliber of interaction and camaraderie as experienced in in-person interviews. We outline several recommendations medical schools can implement to enhance the vMMI experience for applicants and assessors.</p>","PeriodicalId":48532,"journal":{"name":"Perspectives on Medical Education","volume":"12 1","pages":"594-602"},"PeriodicalIF":3.6,"publicationDate":"2023-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10756158/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139075566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sofie Van Ostaeyen, M. Embo, Tijs Rotsaert, Orphée De Clercq, T. Schellens, Martin Valcke
Introduction: Competency-based education requires high-quality feedback to guide students' acquisition of competencies. Sound assessment and feedback systems, such as ePortfolios, are needed to facilitate seeking and giving feedback during clinical placements. However, it is unclear whether the written feedback comments in ePortfolios are of high quality and aligned with the current competency focus. Therefore, this study investigates the quality of written feedback comments in ePortfolios of healthcare students, as well as how these feedback comments align with the CanMEDS roles.
Methods: A qualitative textual analysis was conducted. In total, 2,349 written feedback comments retrieved from the ePortfolios of 149 healthcare students (specialist medicine, general practice, occupational therapy, speech therapy and midwifery) were analysed retrospectively using deductive content analysis. Two structured categorisation matrices, one based on four literature-derived feedback quality criteria (performance, judgment, elaboration and improvement) and another on the seven CanMEDS roles (Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar and Professional), guided the analysis.
Results: A minority of the feedback comments (n = 352; 14.9%) could be considered of high quality because they met all four quality criteria. Most feedback comments were of moderate quality and met only two to three quality criteria. Regarding the CanMEDS roles, the Medical Expert role was most frequently represented in the feedback comments, as opposed to the roles Leader and Health Advocate.
Discussion: The results highlighted that providing high-quality feedback is challenging. To respond to these challenges, it is recommended to set up individual and continuous feedback training.
{"title":"A Qualitative Textual Analysis of Feedback Comments in ePortfolios: Quality and Alignment with the CanMEDS Roles","authors":"Sofie Van Ostaeyen, M. Embo, Tijs Rotsaert, Orphée De Clercq, T. Schellens, Martin Valcke","doi":"10.5334/pme.1050","DOIUrl":"https://doi.org/10.5334/pme.1050","url":null,"abstract":"Introduction: Competency-based education requires high-quality feedback to guide students’ acquisition of competencies. Sound assessment and feedback systems, such as ePortfolios, are needed to facilitate seeking and giving feedback during clinical placements. However, it is unclear whether the written feedback comments in ePortfolios are of high quality and aligned with the current competency focus. Therefore, this study investigates the quality of written feedback comments in ePortfolios of healthcare students, as well as how these feedback comments align with the CanMEDS roles. Methods: A qualitative textual analysis was conducted. 2,349 written feedback comments retrieved from the ePortfolios of 149 healthcare students (specialist medicine, general practice, occupational therapy, speech therapy and midwifery) were analysed retrospectively using deductive content analysis. Two structured categorisation matrices, one based on four literature-derived feedback quality criteria (performance, judgment, elaboration and improvement) and another one on the seven CanMEDS roles (Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar and Professional), guided the analysis. Results: The minority of the feedback comments (n = 352; 14.9%) could be considered of high quality because they met all four quality criteria. Most feedback comments were of moderate quality and met only two to three quality criteria. Regarding the CanMEDS roles, the Medical Expert role was most frequently represented in the feedback comments, as opposed to the roles Leader and Health Advocate. Discussion: The results highlighted that providing high-quality feedback is challenging. To respond to these challenges, it is recommended to set up individual and continuous feedback training.","PeriodicalId":48532,"journal":{"name":"Perspectives on Medical Education","volume":"4 3","pages":"584 - 593"},"PeriodicalIF":3.6,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138947549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introduction: Schwartz Rounds (“Rounds”) are a confidential group reflection forum, increasingly adopted to support pre-registration healthcare students. This realist review aims to understand what the available literature and key informant interviews can tell us about Rounds in this setting, asking what works, for whom, in what circumstances, and why?
Methods: Published literature discussing Rounds in undergraduate settings was analysed using realist methods to describe how, for whom and in which contexts Rounds work. Four key informants were interviewed using realist methods to further develop, test and refine a programme theory of Rounds in undergraduate settings.
Results: We identified five core features and five contextual adaptations. Core: Rounds provide a reflective space to discuss emotional challenges; Rounds promote an open and humanised professional culture; Rounds offer role-modelling of vulnerability, enabling interpersonal connectedness; Rounds are impactful when focused on emotional and relational elements; Rounds offer reflective insights from a wide range of perspectives. Contextual adaptations: Rounds allow reflection to be more engaging for students when they are non-mandatory; perceptions of safety within a Round vary based on multiple factors; adapting timing and themes to students’ changing needs may improve engagement; resonance with stories is affected by clinical experience levels; online adaptation can increase reach but may risk psychological safety.
Discussion: Schwartz Rounds are a unique intervention that can support healthcare students through their pre-registration education. The five “core” and five “contextual adaptation” features presented identify important considerations for organisations implementing Rounds for their undergraduates.
{"title":"How Does a Group Reflection Intervention (Schwartz Rounds) Work within Healthcare Undergraduate Settings? A Realist Review","authors":"Duncan Hamilton, Cath Taylor, J. Maben","doi":"10.5334/pme.930","DOIUrl":"https://doi.org/10.5334/pme.930","url":null,"abstract":"Introduction: Schwartz Rounds (“Rounds”) are a confidential group reflection forum, increasingly adopted to support pre-registration healthcare students. This realist review aims to understand what the available literature and key informant interviews can tell us about Rounds in this setting, asking what works, for whom, in what circumstances, and why? Methods: Published literature discussing Rounds in undergraduate settings were analysed using realist methods to describe how, for whom and in which contexts Rounds work. Four key informants were interviewed using realist methods, to further develop, test and refine a programme theory of Rounds in undergraduate settings. Results: We identified five core features and five contextual adaptations. Core: Rounds provide a reflective space to discuss emotional challenges; Rounds promote an open and humanised professional culture; Rounds offer role-modelling of vulnerability, enabling interpersonal connectedness; Rounds are impactful when focused on emotional and relational elements; Rounds offer reflective insights from a wide range of perspectives. Contextual adaptations: Rounds allow reflection to be more engaging for students when they are non-mandatory; perceptions of safety within a Round varies based on multiple factors; adapting timing and themes to students’ changing needs may improve engagement; resonance with stories is affected by clinical experience levels; online adaptation can increase reach but may risk psychological safety. Discussion: Schwartz Rounds are a unique intervention that can support healthcare students through their pre-registration education. The five “core” and five “contextual adaptation” features presented identify important considerations for organisations implementing Rounds for their undergraduates.","PeriodicalId":48532,"journal":{"name":"Perspectives on Medical Education","volume":" 21","pages":"550 - 564"},"PeriodicalIF":3.6,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138961542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
L. Lingard, Madawa Chandritilake, Merel De Heer, J. Klasen, Fury Maulina, Francisco M. Olmos-Vega, Christina St-Onge
ChatGPT has been widely heralded as a way to level the playing field in scientific communication through its free language editing service. However, such claims lack systematic evidence. A writing scholar (LL) and six non-native English scholars researching health professions education collaborated on this Writer’s Craft to fill this gap. Our overarching aim was to provide experiential evidence about ChatGPT’s performance as a language editor and writing coach. We implemented three cycles of a systematic procedure, describing how we developed our prompts, selected text for editing, incrementally prompted to refine ChatGPT’s responses, and analyzed the quality of its language edits and explanations. From this experience, we offer five insights, and we conclude that the optimism about ChatGPT’s capacity to level the playing field for non-native English writers should be tempered.
In the Writer’s Craft section we offer simple tips to improve your writing in one of three areas: Energy, Clarity and Persuasiveness. Each entry focuses on a key writing feature or strategy, illustrates how it commonly goes wrong, teaches the grammatical underpinnings necessary to understand it and offers suggestions to wield it effectively. We encourage readers to share comments on or suggestions for this section on Twitter, using the hashtag: #how’syourwriting?
{"title":"Will ChatGPT’s Free Language Editing Service Level the Playing Field in Science Communication?: Insights from a Collaborative Project with Non-native English Scholars","authors":"L. Lingard, Madawa Chandritilake, Merel De Heer, J. Klasen, Fury Maulina, Francisco M. Olmos-Vega, Christina St-Onge","doi":"10.5334/pme.1246","DOIUrl":"https://doi.org/10.5334/pme.1246","url":null,"abstract":"ChatGPT has been widely heralded as a way to level the playing field in scientific communication through its free language editing service. However, such claims lack systematic evidence. A writing scholar (LL) and six non-native English scholars researching health professions education collaborated on this Writer’s Craft to fill this gap. Our overarching aim was to provide experiential evidence about ChatGPT’s performance as a language editor and writing coach. We implemented three cycles of a systematic procedure, describing how we developed our prompts, selected text for editing, incrementally prompted to refine ChatGPT’s responses, and analyzed the quality of its language edits and explanations. From this experience, we offer five insights, and we conclude that the optimism about ChatGPT’s capacity to level the playing field for non-native English writers should be tempered.\u0000In the writer’s craft section we offer simple tips to improve your writing in one of three areas: Energy, Clarity and Persuasiveness. Each entry focuses on a key writing feature or strategy, illustrates how it commonly goes wrong, teaches the grammatical underpinnings necessary to understand it and offers suggestions to wield it effectively. We encourage readers to share comments on or suggestions for this section on Twitter, using the hashtag: #how’syourwriting?","PeriodicalId":48532,"journal":{"name":"Perspectives on Medical Education","volume":" 984","pages":""},"PeriodicalIF":3.6,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138960398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Purpose: Increased attention to improving a culture of belonging in clinical learning environments has led to various approaches to addressing microaggressions. However, most approaches in the literature focus on responding or reacting to microaggressions, with insufficient attention to building trust before microaggressions might occur. Research on microaggressions in clinical learning environments suggests anticipatory or pre-emptive conversations about microaggressions may foster greater trust. In this study, the authors explored how diverse participants perceived the experience of anticipatory conversations about potential microaggressions. Overall, the authors sought to gain a deeper understanding of how pre-emptive and anticipatory conversations may influence an organization’s approach to addressing microaggressions in clinical learning environments.
Methods: The authors utilized constructivist grounded theory methodology and conducted individual qualitative interviews with 21 participants in an academic department within a larger health sciences center in the United States.
Results: Findings suggest that anticipatory conversations about microaggressions were challenging due to existing norms in dynamic clinical learning and working environments. Participants shared that the idea of anticipating microaggressions elicited dissonance. Conversations about microaggressions could potentially be facilitated through leaders who role model vulnerability, organizational supports, and an individualized approach for each team member and their role within a complex hierarchical organization.
Discussion: Anticipating and addressing microaggressions in clinical learning environments holds tremendous potential; however, any conversations about personal identity remain challenging in medical and healthcare environments. This study suggests that any attempt to address microaggressions requires attention to cultural norms within healthcare environments and the ways that hierarchical organizations can constrain individual agency.
{"title":"It is Challenging to Shift the Norm: Exploring how to Anticipate and Address Microaggressions in Clinical Learning Environments","authors":"J. Sukhera, Tess M. Atkinson, Justin L. Bullock","doi":"10.5334/pme.1251","DOIUrl":"https://doi.org/10.5334/pme.1251","url":null,"abstract":"Purpose: Increased attention to improving a culture of belonging in clinical learning environments has led to various approaches to addressing microaggressions. However, most approaches in the literature focus on responding or reacting to microaggressions with insufficient attention to building trust before microaggressions might occur. Research on microaggressions in clinical learning environments suggests anticipatory or pre-emptive conversations about microaggressions may foster greater trust. In this study, the authors explored how diverse participants perceived the experience of anticipatory conversations about potential microaggressions. Overall, the authors sought to gain a deeper understanding of how pre-emptive and anticipatory conversations may influence an organization’s approach to addressing microaggressions in clinical learning environments. Methods: The authors utilized constructivist grounded theory methodology and conducted individual qualitative interviews with 21 participants in an academic department within a larger health sciences center in the United States. Results: Findings suggest that anticipatory conversations about microaggressions were challenging due to existing norms in dynamic clinical learning and working environments. Participants shared that the idea of anticipating microaggressions elicited dissonance. Conversations about microaggressions could potentially be facilitated through leaders who role model vulnerability, organizational supports, and an individualized approach for each team member and their role within a complex hierarchical organization. Discussion: Anticipating and addressing microaggressions in clinical learning environments holds tremendous potential, however, any conversations about personal identity remain challenging in medical and healthcare environments. This study suggests that any attempts to address microaggressions requires attention to cultural norms within healthcare environments and the ways that hierarchical organizations can constrain individual agency.","PeriodicalId":48532,"journal":{"name":"Perspectives on Medical Education","volume":" 91","pages":"575 - 583"},"PeriodicalIF":3.6,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138961156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sofie Van Ostaeyen, Loic De Langhe, Orphée De Clercq, M. Embo, T. Schellens, M. Valcke
Introduction: Manually analysing the quality of large amounts of written feedback comments is time-consuming and demands extensive resources and human effort. Therefore, this study aimed to explore whether a state-of-the-art large language model (LLM) could be fine-tuned to identify the presence of four literature-derived feedback quality criteria (performance, judgment, elaboration and improvement) and the seven CanMEDS roles (Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar and Professional) in written feedback comments.
Methods: A set of 2,349 labelled feedback comments from five healthcare educational programs in Flanders (Belgium) (specialist medicine, general practice, midwifery, speech therapy and occupational therapy) was split into 12,452 sentences to create two datasets for the machine learning analysis. The Dutch BERT models BERTje and RobBERT were used to train four multiclass-multilabel classification models: two to identify the four feedback quality criteria and two to identify the seven CanMEDS roles.
Results: The classification models trained with BERTje and RobBERT to predict the presence of the four feedback quality criteria attained macro average F1-scores of 0.73 and 0.76, respectively. The models predicting the presence of the CanMEDS roles attained F1-scores of 0.71 with BERTje and 0.72 with RobBERT.
Discussion: The results showed that a state-of-the-art LLM is able to identify the presence of the four feedback quality criteria and the CanMEDS roles in written feedback comments. This implies that the quality analysis of written feedback comments can be automated using an LLM, saving time and resources.
{"title":"Automating the Identification of Feedback Quality Criteria and the CanMEDS Roles in Written Feedback Comments Using Natural Language Processing","authors":"Sofie Van Ostaeyen, Loic De Langhe, Orphée De Clercq, M. Embo, T. Schellens, M. Valcke","doi":"10.5334/pme.1056","DOIUrl":"https://doi.org/10.5334/pme.1056","url":null,"abstract":"Introduction: Manually analysing the quality of large amounts of written feedback comments is time-consuming and demands extensive resources and human effort. Therefore, this study aimed to explore whether a state-of-the-art large language model (LLM) could be fine-tuned to identify the presence of four literature-derived feedback quality criteria (performance, judgment, elaboration and improvement) and the seven CanMEDS roles (Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar and Professional) in written feedback comments. Methods: A set of 2,349 labelled feedback comments of five healthcare educational programs in Flanders (Belgium) (specialistic medicine, general practice, midwifery, speech therapy and occupational therapy) was split into 12,452 sentences to create two datasets for the machine learning analysis. The Dutch BERT models BERTje and RobBERT were used to train four multiclass-multilabel classification models: two to identify the four feedback quality criteria and two to identify the seven CanMEDS roles. Results: The classification models trained with BERTje and RobBERT to predict the presence of the four feedback quality criteria attained macro average F1-scores of 0.73 and 0.76, respectively. The F1-score of the model predicting the presence of the CanMEDS roles trained with BERTje was 0.71 and 0.72 with RobBERT. Discussion: The results showed that a state-of-the-art LLM is able to identify the presence of the four feedback quality criteria and the CanMEDS roles in written feedback comments. This implies that the quality analysis of written feedback comments can be automated using an LLM, leading to savings of time and resources.","PeriodicalId":48532,"journal":{"name":"Perspectives on Medical Education","volume":"83 s369","pages":"540 - 549"},"PeriodicalIF":3.6,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138995320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-15 | eCollection Date: 2023-01-01 | DOI: 10.5334/pme.1053
Catherine M Giroux, Lauren A Maggio, Conchita Saldanha, André Bussières, Aliki Thomas
Introduction: Social media may facilitate knowledge sharing within health professions education (HPE), but whether and how it is used as a mechanism of knowledge translation (KT) is not understood. This exploratory study aimed to ascertain what content has been shared on Twitter using #MedEd and how it is used as a mechanism of KT.
Methods: Symplur was used to identify all tweets tagged with #MedEd between March 2021 and March 2022. A directed content analysis and multiple cycles of coding were employed. In total, 18,000 tweets were identified, of which 478 were included. Tweets sharing high-quality HPE information; relating to undergraduate, postgraduate, or continuing education; referring to an evidence source; and posted in English or French were included.
Results: Diverse content was shared using #MedEd, including original tweets, links to peer-reviewed articles, and visual media. Tweets shared information about new educational approaches; system, clinical, or educational research outcomes; and measurement tools. #MedEd appears to be a mechanism of diffusion (n = 296 tweets) and dissemination (n = 164 tweets). It is less frequently used for knowledge exchange (n = 13 tweets) and knowledge synthesis (n = 5 tweets). No tweets demonstrated the ethically sound application of knowledge.
Discussion: It is challenging to determine whether and how #MedEd is used to promote the uptake of knowledge into HPE or if it is even possible for Twitter to serve these purposes. Further studies exploring how health professions educators use the knowledge gained from Twitter to inform their educational or clinical practices are recommended.
{"title":"Twitter as a Mechanism of Knowledge Translation in Health Professions Education: An Exploratory Content Analysis.","authors":"Catherine M Giroux, Lauren A Maggio, Conchita Saldanha, André Bussières, Aliki Thomas","doi":"10.5334/pme.1053","DOIUrl":"https://doi.org/10.5334/pme.1053","url":null,"abstract":"<p><strong>Introduction: </strong>Social media may facilitate knowledge sharing within health professions education (HPE), but whether and how it is used as a mechanism of knowledge translation (KT) is not understood. This exploratory study aimed to ascertain what content has been shared on Twitter using #MedEd and how it is used as a mechanism of KT.</p><p><strong>Methods: </strong>Symplur was used to identify all tweets tagged with #MedEd between March 2021 - March 2022. A directed content analysis and multiple cycles of coding were employed. 18,000 tweets were identified, of which 478 were included. Studies sharing high quality HPE information; relating to undergraduate, postgraduate, or continuing education; referring to an evidence source; and posted in English or French were included.</p><p><strong>Results: </strong>Diverse content was shared using #MedEd, including original tweets, links to peer-reviewed articles, and visual media. Tweets shared information about new educational approaches; system, clinical, or educational research outcomes; and measurement tools. #MedEd appears to be a mechanism of diffusion (n = 296 tweets) and dissemination (n = 164 tweets). It is less frequently used for knowledge exchange (n = 13 tweets) and knowledge synthesis (n = 5 tweets). No tweets demonstrated the ethically sound application of knowledge.</p><p><strong>Discussion: </strong>It is challenging to determine whether and how #MedEd is used to promote the uptake of knowledge into HPE or if it is even possible for Twitter to serve these purposes. Further studies exploring how health professions educators use the knowledge gained from Twitter to inform their educational or clinical practices are recommended.</p>","PeriodicalId":48532,"journal":{"name":"Perspectives on Medical Education","volume":"12 1","pages":"529-539"},"PeriodicalIF":3.6,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10723015/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138812678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-07 | eCollection Date: 2023-01-01 | DOI: 10.5334/pme.1161
Maham Rehman, Divya Santhanam, Javeed Sukhera
Introduction: Despite increasing attention to improving equity, diversity, and inclusion in academic medicine, a theoretically informed perspective on advancing equity is often missing. Intersectionality is a theoretical framework that refers to the study of the dynamic nature of social categories with which an individual identifies and their unique localization within power structures. Intersectionality can be a useful lens to understand and address inequity; however, there is limited literature on intersectionality in the context of medical education. Thus, we explored how intersectionality has been conceptualized and applied in medical education.
Methods: We employed a meta-narrative review, analyzing existing literature on intersectionality theory and frameworks in medical education. Three electronic databases were searched using key terms, yielding 32 articles. After title, abstract and full-text screening, 14 articles were included. Analysis of the articles sought a meaningful synthesis of the application of intersectionality theory to medical education.
Results: Existing literature on intersectionality discusses the role of identity categorization and the relationship between identity, power, and social change. There are contrasting narratives on the practical application of intersectionality to medical education, producing tensions between how intersectionality is understood as theory and how it is translated in practice.
Discussion: A paucity of literature on intersectionality in medical education suggests a risk that intersectionality may be understood superficially and treated as a synonym for diversity. Drawing explicit attention to its core tenets of reflexivity, transformational identity, and analysis of power is important to maintain fidelity to how intersectionality is understood in broader critical social science literature.
{"title":"Intersectionality in Medical Education: A Meta-Narrative Review.","authors":"Maham Rehman, Divya Santhanam, Javeed Sukhera","doi":"10.5334/pme.1161","DOIUrl":"10.5334/pme.1161","url":null,"abstract":"<p><strong>Introduction: </strong>Despite increasing attention to improving equity, diversity, and inclusion in academic medicine, a theoretically informed perspective to advancing equity is often missing. Intersectionality is a theoretical framework that refers to the study of the dynamic nature of social categories with which an individual identifies and their unique localization within power structures. Intersectionality can be a useful lens to understand and address inequity, however, there is limited literature on intersectionality in the context of medical education. Thus, we explored how intersectionality has been conceptualized and applied in medical education.</p><p><strong>Methods: </strong>We employed a meta-narrative review, analyzing existing literature on intersectionality theory and frameworks in medical education. Three electronic databases were searched using key terms yielding 32 articles. After, title, abstract and full-text screening 14articles were included. Analysis of articles sought a meaningful synthesis on application of intersectionality theory to medical education.</p><p><strong>Results: </strong>Existing literature on intersectionality discussesthe role of identity categorization and the relationship between identity, power, and social change. There are contrasting narratives on the practical application of intersectionality to medical education, producing tensions between how intersectionality is understood as theory and how it is translated in practice.</p><p><strong>Discussion: </strong>A paucity in literature on intersectionality in medical education suggests that there is a risk intersectionality may be understood in a superficial manner and considered a synonym for diversity. Drawing explicit attention to its core tenets of reflexivity, transformational identity, and analysis of power is important to maintain fidelity to how intersectionality is understood in broader critical social science literature.</p>","PeriodicalId":48532,"journal":{"name":"Perspectives on Medical Education","volume":"12 1","pages":"517-528"},"PeriodicalIF":3.6,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10637289/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89720034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}