Enabling diagnostic excellence in the real world: Managing complexity, uncertainty and clinical responsibility
Pub Date: 2025-03-01 | Epub Date: 2024-09-16 | DOI: 10.1080/0142159X.2024.2402032
Nicola Cunningham, Helmy Cook, Julia Harrison
Diagnostic error is a significant category within preventable patient harm, and developing proficiency in diagnostic reasoning takes many years of effort. One of the key challenges medical schools must address is preparing students for the complexity, uncertainty and clinical responsibility involved in going from student to doctor. Recognising the importance of both cognitive and systems-related factors in diagnostic accuracy, we designed the QUID Prompt (Questions to Use for Improving Diagnosis) for students to refer to at the bedside. This set of questions prompts careful consideration, analysis, and signposting of decision-making processes, assisting students in transitioning from medical school to the real world of work and achieving diagnostic excellence in clinical settings.
{"title":"Enabling diagnostic excellence in the real world: Managing complexity, uncertainty and clinical responsibility.","authors":"Nicola Cunningham, Helmy Cook, Julia Harrison","doi":"10.1080/0142159X.2024.2402032","DOIUrl":"10.1080/0142159X.2024.2402032","url":null,"abstract":"<p><p>Diagnostic error is a significant category within preventable patient harm, and it takes many years of effort to develop proficiency in diagnostic reasoning. One of the key challenges medical schools must address is preparing students for the complexity, uncertainty and clinical responsibility in going from student to doctor. Recognising the importance of both cognitive and systems-related factors in diagnostic accuracy, we designed the QUID Prompt (Questions to Use for Improving Diagnosis) for students to refer to at the bedside. This set of questions prompts careful consideration, analysis, and signposting of decision-making processes, to assist students in transitioning from medical school to the real-world of work and achieving diagnostic excellence in clinical settings.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"404-406"},"PeriodicalIF":3.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142291255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physical and biophysical markers of assessment in medical training: A scoping review of the literature
Pub Date: 2025-03-01 | Epub Date: 2024-04-30 | DOI: 10.1080/0142159X.2024.2345269
Danielle T Miller, Sarah Michael, Colin Bell, Cody H Brevik, Bonnie Kaplan, Ellie Svoboda, John Kendall
Purpose: Assessment in medical education has changed over time to measure the evolving skills required of current medical practice. Physical and biophysical markers of assessment attempt to use technology to gain insight into medical trainees' knowledge, skills, and attitudes. The authors conducted a scoping review to map the literature on the use of physical and biophysical markers of assessment in medical training.
Materials and methods: The authors searched seven databases on 1 August 2022 for publications that utilized physical or biophysical markers in the assessment of medical trainees (medical students, residents, fellows, and synonymous terms used in other countries). Physical or biophysical markers included: heart rate and heart rate variability, visual tracking and attention, pupillometry, hand motion analysis, skin conductivity, salivary cortisol, functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS). From February to June 2023, the authors mapped the relevant literature using Bloom's taxonomy of knowledge, skills, and attitudes and extracted additional data, including study design, study environment, and novice vs. expert differentiation.
Results: Of 6,069 unique articles, 443 met inclusion criteria. The most common marker was heart rate variability (n = 160, 36%), followed by visual attention (n = 143, 32%), hand motion analysis (n = 67, 15%), salivary cortisol (n = 67, 15%), fMRI (n = 29, 7%), skin conductivity (n = 26, 6%), fNIRS (n = 19, 4%), and pupillometry (n = 16, 4%). The largest share of studies (n = 167, 38%) analyzed non-technical skills, followed by technical skills (n = 155, 35%), knowledge (n = 114, 26%), and attitudinal skills (n = 61, 14%). A total of 169 studies (38%) attempted to use physical or biophysical markers to differentiate between novices and experts.
Conclusion: This review provides a comprehensive description of the current use of physical and biophysical markers in medical training, including the technologies employed and the skills assessed. While physical and biophysical markers have the potential to augment current assessment in medical education, there remain significant gaps in research on the reliability, validity, cost, practicality, and educational impact of implementing these markers.
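As a quick arithmetic check on the results above, this sketch recomputes each marker's percentage from the counts stated in the abstract (n out of 443 included studies). Percentages total more than 100% because a single study can employ several markers; only the counts come from the source, the code itself is illustrative.

```python
# Recompute the marker percentages reported above from the stated counts.
# N = 443 included studies; totals exceed 100% because one study can use
# more than one marker.
marker_counts = {
    "heart rate variability": 160,
    "visual attention": 143,
    "hand motion analysis": 67,
    "salivary cortisol": 67,
    "fMRI": 29,
    "skin conductivity": 26,
    "fNIRS": 19,
    "pupillometry": 16,
}
N = 443
for marker, n in marker_counts.items():
    print(f"{marker}: n = {n} ({n / N:.0%})")  # e.g. 160/443 -> 36%
```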
{"title":"Physical and biophysical markers of assessment in medical training: A scoping review of the literature.","authors":"Danielle T Miller, Sarah Michael, Colin Bell, Cody H Brevik, Bonnie Kaplan, Ellie Svoboda, John Kendall","doi":"10.1080/0142159X.2024.2345269","DOIUrl":"10.1080/0142159X.2024.2345269","url":null,"abstract":"<p><strong>Purpose: </strong>Assessment in medical education has changed over time to measure the evolving skills required of current medical practice. Physical and biophysical markers of assessment attempt to use technology to gain insight into medical trainees' knowledge, skills, and attitudes. The authors conducted a scoping review to map the literature on the use of physical and biophysical markers of assessment in medical training.</p><p><strong>Materials and methods: </strong>The authors searched seven databases on 1 August 2022, for publications that utilized physical or biophysical markers in the assessment of medical trainees (medical students, residents, fellows, and synonymous terms used in other countries). Physical or biophysical markers included: heart rate and heart rate variability, visual tracking and attention, pupillometry, hand motion analysis, skin conductivity, salivary cortisol, functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS). The authors mapped the relevant literature using Bloom's taxonomy of knowledge, skills, and attitudes and extracted additional data including study design, study environment, and novice vs. expert differentiation from February to June 2023.</p><p><strong>Results: </strong>Of 6,069 unique articles, 443 met inclusion criteria. The majority of studies assessed trainees using heart rate variability (<i>n</i> = 160, 36%) followed by visual attention (<i>n</i> = 143, 32%), hand motion analysis (<i>n</i> = 67, 15%), salivary cortisol (<i>n</i> = 67, 15%), fMRI (<i>n</i> = 29, 7%), skin conductivity (<i>n</i> = 26, 6%), fNIRs (<i>n</i> = 19, 4%), and pupillometry (<i>n</i> = 16, 4%). The majority of studies (<i>n</i> = 167, 38%) analyzed non-technical skills, followed by studies that analyzed technical skills (<i>n</i> = 155, 35%), knowledge (<i>n</i> = 114, 26%), and attitudinal skills (<i>n</i> = 61, 14%). 169 studies (38%) attempted to use physical or biophysical markers to differentiate between novice and expert.</p><p><strong>Conclusion: </strong>This review provides a comprehensive description of the current use of physical and biophysical markers in medical education training, including the current technology and skills assessed. Additionally, while physical and biophysical markers have the potential to augment current assessment in medical education, there remains significant gaps in research surrounding reliability, validity, cost, practicality, and educational impact of implementing these markers of assessment.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"436-444"},"PeriodicalIF":3.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140861971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feeling the responsibility: Exploring the emotional experiences of final-year medical students when carrying out clinical tasks
Pub Date: 2025-03-01 | Epub Date: 2024-05-21 | DOI: 10.1080/0142159X.2024.2351137
Miriam Alexander, Ronja Behrend, Anne Franz, Harm Peters
Purpose: The concept of Entrustable Professional Activities (EPAs) is increasingly used to operationalize learning in the clinical workplace, yet little is known about the emotions learners experience as they feel the responsibility of carrying out professional tasks.
Methods: We explored the emotional experiences of medical students in their final clerkship year when performing clinical tasks. We used an online reflective diary. Text entries were analysed using inductive-deductive content analysis with reference to the EPA framework and the control-value theory of achievement emotions.
Results: Students described a wide range of emotions related to carrying out various clinical tasks. They reported positive-activating emotions, ranging from enjoyment to relaxation, and negative-deactivating emotions, ranging from anxiety to boredom. Emotions varied across individual students and were related to the characteristics of a task, an increasing level of autonomy, the students' perceived ability to perform a task and the level of supervision provided.
Discussion: Emotions are widely present in the workplace learning of medical students, affect that learning, and relate to key elements of the EPA framework. Supervisors play a key role in eliciting positive-activating emotions and the motivation to learn by providing a level of supervision and guidance appropriate to the student's perceived ability to perform the task.
{"title":"Feeling the responsibility: Exploring the emotional experiences of final-year medical students when carrying out clinical tasks.","authors":"Miriam Alexander, Ronja Behrend, Anne Franz, Harm Peters","doi":"10.1080/0142159X.2024.2351137","DOIUrl":"10.1080/0142159X.2024.2351137","url":null,"abstract":"<p><strong>Purpose: </strong>The concept of Entrustable Professional Activities (EPA) is increasingly used to operationalize learning in the clinical workplace, yet little is known about the emotions of learners feeling the responsibility when carrying out professional tasks.</p><p><strong>Methods: </strong>We explored the emotional experiences of medical students in their final clerkship year when performing clinical tasks. We used an online reflective diary. Text entries were analysed using inductive-deductive content analysis with reference to the EPA framework and the control-value theory of achievement emotions.</p><p><strong>Results: </strong>Students described a wide range of emotions related to carrying out various clinical tasks. They reported positive-activating emotions, ranging from enjoyment to relaxation, and negative-deactivating emotions, ranging from anxiety to boredom. Emotions varied across individual students and were related to the characteristics of a task, an increasing level of autonomy, the students' perceived ability to perform a task and the level of supervision provided.</p><p><strong>Discussion: </strong>Emotions are widely present and impact on the workplace learning of medical students which is related to key elements of the EPA framework. Supervisors play a key role in eliciting positive-activating emotions and the motivation to learn by providing a level of supervision and guidance appropriate to the students' perceived ability to perform the task.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"513-520"},"PeriodicalIF":3.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141076241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The influence of candidates' race on examiners' ratings in standardised assessments of clinical practice
Pub Date: 2025-03-01 | Epub Date: 2024-05-21 | DOI: 10.1080/0142159X.2024.2345266
Celia Brown, Sarah Khavandi, Ann Sebastian, Kerry Badger, Rachel Westacott, Malcolm W R Reed, Mark Gurnell, Amir H Sam
Purpose: Delivering fair and reliable summative assessments in medical education assumes examiner decision making is devoid of bias. We investigated whether candidate racial appearances influenced examiner ratings in undergraduate clinical exams.
Methods: We used an internet-based design. Examiners watched a randomised set of six videos of three different white candidates and three different non-white (Asian, black and Chinese) candidates taking a clinical history at either fail, borderline or pass grades. We compared the median and interquartile range (IQR) of the paired difference between scores for the white and non-white candidates at each performance grade and tested for statistical significance.
Results: A total of 160 examiners participated. At the fail grade, the black and Chinese candidates scored lower than the white candidate, with median paired differences of -2.5 and -1 respectively (both p < 0.001). At the borderline grade, the black and Chinese candidates scored higher than the white candidate, with median paired differences of +2 and +3, respectively (both p < 0.001). At the passing grade, the Asian candidate scored lower than the white candidate (median paired difference -1, p < 0.001).
Conclusion: The racial appearance of candidates appeared to influence the scores awarded by examiners, but not in a uniform manner.
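The methods above summarise paired score differences by median and IQR and test them for significance. The abstract does not name the test used; a Wilcoxon signed-rank test is the standard non-parametric choice for paired ordinal scores, and the sketch below illustrates that analysis on simulated placeholder scores, not study data.

```python
# Illustrative version of the paired analysis described above: each examiner
# scores a white and a non-white candidate at the same performance grade, and
# the paired differences are summarised by median/IQR and tested for
# significance. All numbers here are simulated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
white = rng.integers(8, 16, size=160)               # hypothetical scores
nonwhite = white - rng.integers(0, 4, size=160)     # hypothetical offsets

diffs = nonwhite - white
q1, median, q3 = np.percentile(diffs, [25, 50, 75])
print(f"median paired difference = {median}, IQR = [{q1}, {q3}]")

# The exact test is not named in the abstract; Wilcoxon signed-rank is the
# usual non-parametric choice for paired ordinal scores (zeros are dropped
# by scipy's default zero_method).
result = stats.wilcoxon(nonwhite, white)
print(f"p = {result.pvalue:.4f}")
```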
{"title":"The influence of candidates' race on examiners' ratings in standardised assessments of clinical practice.","authors":"Celia Brown, Sarah Khavandi, Ann Sebastian, Kerry Badger, Rachel Westacott, Malcolm W R Reed, Mark Gurnell, Amir H Sam","doi":"10.1080/0142159X.2024.2345266","DOIUrl":"10.1080/0142159X.2024.2345266","url":null,"abstract":"<p><strong>Purpose: </strong>Delivering fair and reliable summative assessments in medical education assumes examiner decision making is devoid of bias. We investigated whether candidate racial appearances influenced examiner ratings in undergraduate clinical exams.</p><p><strong>Methods: </strong>We used an internet-based design. Examiners watched a randomised set of six videos of three different white candidates and three different non-white (Asian, black and Chinese) candidates taking a clinical history at either fail, borderline or pass grades. We compared the median and interquartile range (IQR) of the paired difference between scores for the white and non-white candidates at each performance grade and tested for statistical significance.</p><p><strong>Results: </strong>160 Examiners participated. At the fail grade, the black and Chinese candidates scored lower than the white candidate, with median paired differences of -2.5 and -1 respectively (both <i>p</i> < 0.001). At the borderline grade, the black and Chinese candidates scored higher than the white candidate, with median paired differences of +2 and +3, respectively (both <i>p</i> < 0.001). At the passing grade, the Asian candidate scored lower than the white candidate (median paired difference -1, <i>p</i> < 0.001).</p><p><strong>Conclusion: </strong>The racial appearance of candidates appeared to influence the scores awarded by examiners, but not in a uniform manner.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"492-497"},"PeriodicalIF":3.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141076245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bringing competency-based communication training to scale: A multi-institutional virtual simulation-based mastery learning curriculum for Emergency Medicine residents
Pub Date: 2025-03-01 | Epub Date: 2024-05-28 | DOI: 10.1080/0142159X.2024.2345267
Laurie M Aluce, Julie J Cooper, Lillian Liang Emlet, Elaine R Cohen, Simon J Ostrowski, Gordon J Wood, Julia H Vermylen
Purpose: Serious illness communication skills are essential for physicians, yet competency-based training is lacking. We address scalability barriers to competency-based communication skills training by assessing the feasibility of a multi-center, virtual simulation-based mastery learning (vSBML) curriculum on breaking bad news (BBN).
Methods: First-year emergency medicine residents at three academic medical centers participated in the virtual curriculum. Participants completed a pretest with a standardized patient (SP), a workshop with didactics and small group roleplay with SPs, a posttest with an SP, and additional deliberate practice sessions if needed to achieve the minimum passing standard (MPS). Participants were assessed using a previously published BBN assessment tool that included a checklist and scaled items. Authors compared pre- and posttests to evaluate the impact of the curriculum.
Results: Twenty-eight (90%) of 31 eligible residents completed the curriculum. Eighty-nine percent of participants did not meet the MPS at pretest. Post-intervention, there was a statistically significant improvement in checklist performance (median = 93% vs. 53%, p < 0.001) and on all scaled items assessing quality of communication. All participants ultimately achieved the MPS.
Conclusions: A multi-site vSBML curriculum brought all participants to mastery in the core communication skill of BBN and represents a feasible, scalable model to incorporate competency-based communication skills education in a widespread manner.
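The defining feature of the mastery-learning design above is that the outcome is fixed while training time varies: every resident cycles through deliberate practice until reaching the minimum passing standard (MPS). The sketch below captures that loop; the MPS value and all names are illustrative, not taken from the study.

```python
# Sketch of a mastery-learning loop: training continues until each learner
# meets the minimum passing standard (MPS), so the outcome is fixed and the
# number of practice sessions varies. Values and names are illustrative.
from dataclasses import dataclass

MPS = 0.80  # hypothetical minimum passing standard (fraction of checklist items)

@dataclass
class Resident:
    name: str
    checklist_score: float  # latest assessed score, 0.0-1.0

def train_to_mastery(resident: Resident, assess, practice) -> int:
    """Run deliberate-practice cycles until the resident meets the MPS.

    `assess` scores the resident against the checklist; `practice` runs one
    additional deliberate-practice session. Returns the number of extra
    sessions needed.
    """
    extra_sessions = 0
    while assess(resident) < MPS:
        practice(resident)
        extra_sessions += 1
    return extra_sessions

# Toy usage: a resident starting at 53% improves by 10 points per session.
def bump(r: Resident) -> None:
    r.checklist_score += 0.10

print(train_to_mastery(Resident("A", 0.53),
                       assess=lambda r: r.checklist_score,
                       practice=bump))  # -> 3 extra sessions
```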
{"title":"Bringing competency-based communication training to scale: A multi-institutional virtual simulation-based mastery learning curriculum for Emergency Medicine residents.","authors":"Laurie M Aluce, Julie J Cooper, Lillian Liang Emlet, Elaine R Cohen, Simon J Ostrowski, Gordon J Wood, Julia H Vermylen","doi":"10.1080/0142159X.2024.2345267","DOIUrl":"10.1080/0142159X.2024.2345267","url":null,"abstract":"<p><strong>Purpose: </strong>Serious illness communication skills are essential for physicians, yet competency-based training is lacking. We address scalability barriers to competency-based communication skills training by assessing the feasibility of a multi-center, virtual simulation-based mastery learning (vSBML) curriculum on breaking bad news (BBN).</p><p><strong>Methods: </strong>First-year emergency medicine residents at three academic medical centers participated in the virtual curriculum. Participants completed a pretest with a standardized patient (SP), a workshop with didactics and small group roleplay with SPs, a posttest with an SP, and additional deliberate practice sessions if needed to achieve the minimum passing standard (MPS). Participants were assessed using a previously published BBN assessment tool that included a checklist and scaled items. Authors compared pre- and posttests to evaluate the impact of the curriculum.</p><p><strong>Results: </strong>Twenty-eight (90%) of 31 eligible residents completed the curriculum. Eighty-nine percent of participants did not meet the MPS at pretest. Post-intervention, there was a statistically significant improvement in checklist performance (Median= 93% vs. 53%, <i>p</i> < 0.001) and on all scaled items assessing quality of communication. All participants ultimately achieved the MPS.</p><p><strong>Conclusions: </strong>A multi-site vSBML curriculum brought all participants to mastery in the core communication skill of BBN and represents a feasible, scalable model to incorporate competency-based communication skills education in a widespread manner.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"505-512"},"PeriodicalIF":3.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141157978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bridging the gap in teaching self-regulated learning: A call for deeper integration
Pub Date: 2025-03-01 | Epub Date: 2024-11-02 | DOI: 10.1080/0142159X.2024.2422009
Supianto
{"title":"Bridging the gap in teaching self-regulated learning: A call for deeper integration.","authors":"Supianto","doi":"10.1080/0142159X.2024.2422009","DOIUrl":"10.1080/0142159X.2024.2422009","url":null,"abstract":"","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"572-573"},"PeriodicalIF":3.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142564023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blind spots in medical education - International perspectives
Pub Date: 2025-03-01 | Epub Date: 2024-04-30 | DOI: 10.1080/0142159X.2024.2345271
Sean Tackett, Yvonne Steinert, Susan Mirabal, Darcy A Reed, Cynthia R Whitehead, Scott M Wright
Background: All individuals and groups have blind spots that can create problems if unaddressed. The goal of this study was to examine blind spots in medical education from international perspectives.
Methods: From December 2022 to March 2023, we distributed an electronic survey through international networks of medical students, postgraduate trainees, and medical educators. Respondents named blind spots affecting their medical education system and then rated nine blind spot domains from a study of U.S. medical education along five-point Likert-type scales (1 = much less attention needed; 5 = much more attention needed). We tested for differences between blind spot ratings by respondent groups. We also analyzed the blind spots that respondents identified to determine those not previously described and performed content analysis on open-ended responses about blind spot domains.
Results: There were 356 respondents from 88 countries, including 127 (44%) educators, 80 (28%) medical students, and 33 (11%) postgraduate trainees. At least 80% of respondents rated each blind spot domain as needing 'more' or 'much more' attention; the highest was 88% for 'Patient perspectives and voices that are not heard, valued, or understood.' In analyses by gender, role in medical education, World Bank country income level, and region, a mean difference of 0.5 was seen in only five of the possible 279 statistical comparisons. Of the 885 blind spots documented, newly identified areas related to issues that crossed national boundaries (e.g. international standards) and to the sufficiency of resources to support medical education. Comments about the nine blind spot domains illustrated that cultural, health system, and governmental factors influence how blind spots manifest across different settings.
Discussion: There may be general agreement throughout the world about blind spots in medical education that deserve more attention. This could establish a basis for coordinated international effort to allocate resources and tailor interventions that advance medical education.
{"title":"Blind spots in medical education - International perspectives.","authors":"Sean Tackett, Yvonne Steinert, Susan Mirabal, Darcy A Reed, Cynthia R Whitehead, Scott M Wright","doi":"10.1080/0142159X.2024.2345271","DOIUrl":"10.1080/0142159X.2024.2345271","url":null,"abstract":"<p><strong>Background: </strong>All individuals and groups have blind spots that can create problems if unaddressed. The goal of this study was to examine blind spots in medical education from international perspectives.</p><p><strong>Methods: </strong>From December 2022 to March 2023, we distributed an electronic survey through international networks of medical students, postgraduate trainees, and medical educators. Respondents named blind spots affecting their medical education system and then rated nine blind spot domains from a study of U.S. medical education along five-point Likert-type scales (1 = much less attention needed; 5 = much more attention needed). We tested for differences between blind spot ratings by respondent groups. We also analyzed the blind spots that respondents identified to determine those not previously described and performed content analysis on open-ended responses about blind spot domains.</p><p><strong>Results: </strong>There were 356 respondents from 88 countries, including 127 (44%) educators, 80 (28%) medical students, and 33 (11%) postgraduate trainees. At least 80% of respondents rated each blind spot domain as needing 'more' or 'much more' attention; the highest was 88% for 'Patient perspectives and voices that are not heard, valued, or understood.' In analyses by gender, role in medical education, World Bank country income level, and region, a mean difference of 0.5 was seen in only five of the possible 279 statistical comparisons. Of 885 blind spots documented, new blind spot areas related to issues that crossed national boundaries (e.g. international standards) and the sufficiency of resources to support medical education. Comments about the nine blind spot domains illustrated that cultural, health system, and governmental elements influenced how blind spots are manifested across different settings.</p><p><strong>Discussion: </strong>There may be general agreement throughout the world about blind spots in medical education that deserve more attention. This could establish a basis for coordinated international effort to allocate resources and tailor interventions that advance medical education.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"498-504"},"PeriodicalIF":3.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140858931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of a competence-based approach for clerkship teaching under alternating clinical placements: An explanatory sequential mixed-methods research
Pub Date: 2025-03-01 | Epub Date: 2024-05-31 | DOI: 10.1080/0142159X.2024.2356830
Da-Ya Yang, Xiao-Dong Zhuang, Jun-Xun Li, Jing-Zhou Jiang, Yue Guo, Xiao-Yu Zhang, Jun Liu, Wei Chen, Xin-Xue Liao, David C M Taylor
Background: It is unclear whether alternating placements during clinical clerkship, without an explicit emphasis on clinical competencies, would bring about optimal educational outcomes.
Methods: This was an explanatory sequential mixed-methods study. We enrolled a convenience sample of 41 eight-year-programme medical students at Sun Yat-sen University who received alternating placements during clerkship, and compared a competence-based approach (n = 21) with a traditional approach (n = 20) to clerkship teaching. In the quantitative phase, course satisfaction was measured via an online survey and academic performance was determined from final scores on summative assessment. In the qualitative phase, students were invited to semi-structured interviews about their learning experiences, and the transcripts were subjected to thematic analysis.
Results: Quantitative findings showed that students in the study group reported high course satisfaction and achieved significantly better final scores than those in the control group. Qualitative findings from the thematic analysis showed that students were relatively neutral in their preference between placement models, but clearly perceived, capitalised on, and appreciated that their competencies were being cultivated by an instructor whom they regarded as a positive role model.
Conclusion: A competence-based approach to clerkship teaching resulted in better course satisfaction and academic performance, and was perceived, capitalised on, and appreciated by students.
{"title":"Effects of a competence-based approach for clerkship teaching under alternating clinical placements: An explanatory sequential mixed-methods research.","authors":"Da-Ya Yang, Xiao-Dong Zhuang, Jun-Xun Li, Jing-Zhou Jiang, Yue Guo, Xiao-Yu Zhang, Jun Liu, Wei Chen, Xin-Xue Liao, David C M Taylor","doi":"10.1080/0142159X.2024.2356830","DOIUrl":"10.1080/0142159X.2024.2356830","url":null,"abstract":"<p><strong>Background: </strong>It is unclear whether alternating placements during clinical clerkship, without an explicit emphasis on clinical competencies, would bring about optimal educational outcomes.</p><p><strong>Methods: </strong>This is an explanatory sequential mixed-methods research. We enrolled a convenience sample of 41 eight-year programme medical students in Sun Yat-sen University who received alternating placements during clerkship. The effects of competence-based approach (<i>n</i> = 21) versus traditional approach (<i>n</i> = 20) to clerkship teaching were compared. In the quantitative phase, course satisfaction was measured <i>via</i> an online survey and academic performance was determined through final scores on summative assessment. Then, in the qualitative phase, students were invited for semi-structured interviews about their learning experiences, and the transcripts were used for thematic analysis.</p><p><strong>Results: </strong>Quantitative findings showed that students in the study group rated high course satisfaction and performed significantly better in their final scores compared with those in the control group. Qualitative findings from thematic analysis showed that students were relatively neutral about their preference on placement models, but clearly perceived, capitalised, and appreciated that their competencies were being cultivated by an instructor who was regarded as a positive role model.</p><p><strong>Conclusion: </strong>A competence-based approach to clerkship teaching resulted in better course satisfaction and academic performance, and was perceived, capitalised, and appreciated by students.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"541-549"},"PeriodicalIF":3.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141180133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging evaluation of quality on medical education research with ChatGPT
Pub Date: 2025-03-01 | Epub Date: 2024-08-04 | DOI: 10.1080/0142159X.2024.2385678
Javier Alejandro Flores-Cohaila, Peter Garcia-Portocarrero, Deysi A Saldaña-Amaya, Brayan Miranda-Chavez, Cesar Copaja-Corzo
What is the educational challenge? The Medical Education Research Study Quality Instrument (MERSQI) is widely used to evaluate the quality of quantitative research in medical education. It has strong evidence of validity and is endorsed by guidelines. However, the manual appraisal process is time-consuming and resource-intensive, highlighting the need for more efficient methods. What are the proposed solutions? We propose using ChatGPT to evaluate the quality of medical education research with the MERSQI and comparing its scoring with that of human evaluators. What are the potential benefits to a broader global audience? Using ChatGPT to evaluate medical education research with the MERSQI can decrease the resources required for quality appraisal. This allows faster summaries of evidence, reducing the workload of researchers, editors, and educators. Furthermore, ChatGPT's capability to extract supporting excerpts provides transparency and may have potential for data extraction and for training new medical education researchers. What are the next steps? We plan to continue evaluating medical education research with ChatGPT using the MERSQI and other instruments to determine its feasibility in this realm. Moreover, we plan to investigate which types of studies ChatGPT performs best on.
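As one way to picture the proposed workflow, the sketch below asks a chat model to score a single MERSQI item and quote a supporting excerpt, assuming the OpenAI Python SDK and a placeholder model name. The authors' actual prompts, model, and workflow are not described here, and MERSQI_ITEMS shows only an illustrative subset of the 10-item instrument.

```python
# Minimal sketch of machine-assisted MERSQI appraisal, assuming the OpenAI
# Python SDK; prompts, model, and item wording are illustrative assumptions,
# not the authors' protocol.
from openai import OpenAI

MERSQI_ITEMS = [
    "Study design",
    "Sampling: number of institutions studied",
    "Validity of evaluation instrument: content",
]  # illustrative subset; the full instrument has 10 items across 6 domains

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_item(abstract_text: str, item: str) -> str:
    """Ask the model to score one MERSQI item and quote a supporting excerpt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the study's model is not specified
        messages=[
            {"role": "system",
             "content": "You appraise the quality of medical education "
                        "research using the MERSQI."},
            {"role": "user",
             "content": f"Score the MERSQI item '{item}' for the study below "
                        f"and quote the excerpt supporting your score.\n\n"
                        f"{abstract_text}"},
        ],
    )
    return response.choices[0].message.content
```

The excerpt-quoting step mirrors the transparency benefit claimed above: a human reviewer can verify each machine score against the quoted passage rather than re-reading the whole paper.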
{"title":"Leveraging evaluation of quality on medical education research with ChatGPT.","authors":"Javier Alejandro Flores-Cohaila, Peter Garcia-Portocarrero, Deysi A Saldaña-Amaya, Brayan Miranda-Chavez, Cesar Copaja-Corzo","doi":"10.1080/0142159X.2024.2385678","DOIUrl":"10.1080/0142159X.2024.2385678","url":null,"abstract":"<p><p><b>What is the educational challenge?</b> The Medical Education Research Study Quality Instrument (MERSQI) is widely used to evaluate the quality of quantitative research in medical education. It has strong evidence of validity and is endorsed by guidelines. However, the manual appraisal process is time-consuming and resource-intensive, highlighting the need for more efficient methods. <b>What are the proposed solutions?</b> We propose to use ChatGPT to evaluate the quality of medical education research with the MERSQI and compare its scoring with those of human evaluators. <b>What are the potential benefits to a broader global audience?</b> Using ChatGPT to evaluate medical education research with the MERSQI can decrease the resources required for quality appraisal. This allows faster summaries of evidence, reducing the workload of researchers, editors, and educators. Furthermore, ChatGPTs' capability to extract supporting excerpts provides transparency and may have the potential for data extraction and training new medical education researchers. <b>What are the next steps?</b> We plan to continue evaluating medical education research with ChatGPT using the MERSQI and other instruments to determine its feasibility in this realm. Moreover, we plan to investigate which types of studies ChatGPT performs best in.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"401-403"},"PeriodicalIF":3.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141889772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can ChatGPT generate practice question explanations for medical students, a new faculty teaching tool?
Pub Date: 2025-03-01 | Epub Date: 2024-06-20 | DOI: 10.1080/0142159X.2024.2363486
Lilin Tong, Jennifer Wang, Srikar Rapaka, Priya S Garg
Introduction: Multiple-choice questions (MCQs) are frequently used for formative assessment in medical school but often lack sufficient answer explanations given faculty time constraints. The Chat Generative Pre-trained Transformer (ChatGPT) has emerged as a potential student learning aid and faculty teaching tool. This study aims to evaluate ChatGPT's performance in answering and explaining MCQs.
Method: Ninety-four faculty-generated MCQs were collected from the pre-clerkship curriculum at a US medical school. ChatGPT's accuracy in answering MCQs was tracked on the first attempt without an answer prompt (Pass 1) and after being given a prompt for the correct answer (Pass 2). Explanations provided by ChatGPT were compared with faculty-generated explanations, using a 3-point evaluation scale to assess accuracy and thoroughness against the faculty-generated answers.
Results: On the first attempt, ChatGPT answered 75% of the faculty-generated MCQs correctly. Among correctly answered questions, 66.4% of ChatGPT's explanations matched faculty explanations, and 89.1% captured some key aspects without providing inaccurate information. The proportion of inaccurate explanations increased significantly when the question was not answered correctly on the first pass (2.7% if correct on the first pass vs. 34.6% if incorrect, p < 0.001).
Conclusion: ChatGPT shows promise in assisting faculty and students with explanations for practice MCQs but should be used with caution. Faculty should review the explanations and supplement them to ensure coverage of learning objectives. Students can benefit from ChatGPT's immediate feedback through explanations when it answers a question correctly on the first try. If the question is answered incorrectly, students should treat the explanation with caution and seek clarification from instructors.
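To make the first-pass effect concrete, the sketch below tests whether explanation accuracy depends on first-pass correctness using a 2x2 table. The cell counts are approximate reconstructions from the stated totals (94 questions, 75% correct on the first pass) and inaccuracy rates (2.7% vs. 34.6%), not the study's raw data, and the choice of Fisher's exact test is ours, not the authors'.

```python
# Approximate 2x2 check of the association between first-pass correctness and
# explanation inaccuracy. Counts are reconstructed from the reported rates
# (2.7% of ~70 correct-first-pass vs. 34.6% of ~24 incorrect-first-pass
# questions) and should be treated as illustrative, not the study's data.
from scipy.stats import fisher_exact

# rows: first pass correct (~70) / incorrect (~24)
# cols: explanation inaccurate / not inaccurate
table = [[2, 68],
         [8, 16]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")
```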
{"title":"Can ChatGPT generate practice question explanations for medical students, a new faculty teaching tool?","authors":"Lilin Tong, Jennifer Wang, Srikar Rapaka, Priya S Garg","doi":"10.1080/0142159X.2024.2363486","DOIUrl":"10.1080/0142159X.2024.2363486","url":null,"abstract":"<p><strong>Introduction: </strong>Multiple-choice questions (MCQs) are frequently used for formative assessment in medical school but often lack sufficient answer explanations given time-restraints of faculty. Chat Generated Pre-trained Transformer (ChatGPT) has emerged as a potential student learning aid and faculty teaching tool. This study aims to evaluate ChatGPT's performance in answering and providing explanations for MCQs.</p><p><strong>Method: </strong>Ninety-four faculty-generated MCQs were collected from the pre-clerkship curriculum at a US medical school. ChatGPT's accuracy in answering MCQ's were tracked on first attempt without an answer prompt (Pass 1) and after being given a prompt for the correct answer (Pass 2). Explanations provided by ChatGPT were compared with faculty-generated explanations, and a 3-point evaluation scale was used to assess accuracy and thoroughness compared to faculty-generated answers.</p><p><strong>Results: </strong>On first attempt, ChatGPT demonstrated a 75% accuracy in correctly answering faculty-generated MCQs. Among correctly answered questions, 66.4% of ChatGPT's explanations matched faculty explanations, and 89.1% captured some key aspects without providing inaccurate information. The amount of inaccurately generated explanations increases significantly if the questions was not answered correctly on the first pass (2.7% if correct on first pass vs. 34.6% if incorrect on first pass, <i>p</i> < 0.001).</p><p><strong>Conclusion: </strong>ChatGPT shows promise in assisting faculty and students with explanations for practice MCQ's but should be used with caution. Faculty should review explanations and supplement to ensure coverage of learning objectives. Students can benefit from ChatGPT for immediate feedback through explanations if ChatGPT answers the question correctly on the first try. If the question is answered incorrectly students should remain cautious of the explanation and seek clarification from instructors.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"560-564"},"PeriodicalIF":3.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141432268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}