Performance of GPT-4o and o1-Pro on United Kingdom Medical Licensing Assessment-style items: a comparative study
Pub Date: 2025-01-01 | Epub Date: 2025-10-10 | DOI: 10.3352/jeehp.2025.22.30
Behrad Vakili, Aadam Ahmad, Mahsa Zolfaghari
Purpose: Large language models (LLMs) such as ChatGPT are of growing interest for their potential to support autonomous learning for licensing examinations such as the UK Medical Licensing Assessment (UKMLA). However, empirical evaluations of artificial intelligence (AI) performance against the UKMLA standard remain limited.
Methods: We evaluated the performance of 2 recent ChatGPT versions, GPT-4o and o1-Pro, on a curated set of 374 UKMLA-style single-best-answer items spanning diverse medical specialties. Statistical comparisons using McNemar's test assessed the significance of differences between the 2 models. Specialties were analyzed to identify domain-specific variation. In addition, 20 image-based items were evaluated.
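To illustrate the paired comparison described in these methods, here is a minimal Python sketch (not the authors' code) that builds the 2×2 agreement table from two per-item correctness vectors and applies McNemar's test via statsmodels; the response vectors are simulated placeholders, not the study's data.

    # Paired comparison of two models on the same items via McNemar's test.
    # The per-item correctness vectors are simulated placeholders, not study data.
    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    rng = np.random.default_rng(0)
    n_items = 374
    gpt4o_correct = rng.random(n_items) < 0.888   # ~88.8% accuracy
    o1pro_correct = rng.random(n_items) < 0.930   # ~93.0% accuracy

    # 2x2 table of agreement/disagreement between the paired outcomes
    table = np.array([
        [np.sum(gpt4o_correct & o1pro_correct),  np.sum(gpt4o_correct & ~o1pro_correct)],
        [np.sum(~gpt4o_correct & o1pro_correct), np.sum(~gpt4o_correct & ~o1pro_correct)],
    ])

    # exact=True uses the binomial test on the discordant cells,
    # appropriate when discordant counts are small
    result = mcnemar(table, exact=True)
    print(f"statistic={result.statistic:.0f}, P={result.pvalue:.4f}")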
Results: GPT-4o achieved an accuracy of 88.8%, while o1-Pro achieved 93.0%. McNemar's test revealed a statistically significant difference in favor of o1-Pro. Across specialties, both models demonstrated excellent performance in surgery, psychiatry, and infectious diseases. Notable differences arose in dermatology, respiratory medicine, and imaging, where o1-Pro consistently outperformed GPT-4o. Nevertheless, isolated weaknesses in general practice were observed. The analysis of image-based items showed 75% accuracy for GPT-4o and 90% for o1-Pro (P=0.25).
Conclusion: ChatGPT shows strong potential as an adjunct learning tool for UKMLA preparation, with both models achieving scores above the calculated pass mark. This underscores the promise of advanced AI models in medical education. However, specialty-specific inconsistencies suggest AI tools should complement, rather than replace, traditional study methods.
{"title":"Performance of GPT-4o and o1-Pro on United Kingdom Medical Licensing Assessment-style items: a comparative study.","authors":"Behrad Vakili, Aadam Ahmad, Mahsa Zolfaghari","doi":"10.3352/jeehp.2025.22.30","DOIUrl":"10.3352/jeehp.2025.22.30","url":null,"abstract":"<p><strong>Purpose: </strong>Large language models (LLMs) such as ChatGPT, and their potential to support autonomous learning for licensing exams like the UK Medical Licensing Assessment (UKMLA), are of growing interest. However, empirical evaluations of artificial intelligence (AI) performance against the UKMLA standard remain limited.</p><p><strong>Methods: </strong>We evaluated the performance of 2 recent ChatGPT versions, GPT-4o and o1-Pro, on a curated set of 374 UKMLA-style single-best-answer items spanning diverse medical specialties. Statistical comparisons using McNemar's test assessed the significance of differences between the 2 models. Specialties were analyzed to identify domain-specific variation. In addition, 20 image-based items were evaluated.</p><p><strong>Results: </strong>GPT-4o achieved an accuracy of 88.8%, while o1-Pro achieved 93.0%. McNemar's test revealed a statistically significant difference in favor of o1-Pro. Across specialties, both models demonstrated excellent performance in surgery, psychiatry, and infectious diseases. Notable differences arose in dermatology, respiratory medicine, and imaging, where o1-Pro consistently outperformed GPT-4o. Nevertheless, isolated weaknesses in general practice were observed. The analysis of image-based items showed 75% accuracy for GPT-4o and 90% for o1-Pro (P=0.25).</p><p><strong>Conclusion: </strong>ChatGPT shows strong potential as an adjunct learning tool for UKMLA preparation, with both models achieving scores above the calculated pass mark. This underscores the promise of advanced AI models in medical education. However, specialty-specific inconsistencies suggest AI tools should complement, rather than replace, traditional study methods.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"30"},"PeriodicalIF":3.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12688319/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145259626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparing generative artificial intelligence platforms and nursing student performance on a women's health nursing examination in Korea: a Rasch model approach
Pub Date: 2025-01-01 | Epub Date: 2025-09-05 | DOI: 10.3352/jeehp.2025.22.23
Eun Jeong Ko, Tae Kyung Lee, Geum Hee Jeong
Purpose: This psychometric study aimed to compare the ability parameter estimates of generative artificial intelligence (AI) platforms with those of nursing students on a 50-item women's health nursing examination at Hallym University, Korea, using the Rasch model. It also sought to estimate item difficulty parameters and evaluate AI performance across varying difficulty levels.
Methods: The exam, consisting of 39 multiple-choice items and 11 true/false items, was administered to 111 fourth-year nursing students in June 2023. In December 2024, 6 generative AI platforms (GPT-4o, ChatGPT Free, Claude.ai, Clova X, Mistral.ai, Google Gemini) completed the same items. The responses were analyzed using the Rasch model to estimate the ability and difficulty parameters. Unidimensionality was verified by the Dimensionality Evaluation to Enumerate Contributing Traits (DETECT), and analyses were conducted using the R packages irtQ and TAM.
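The study fit the Rasch model with the R packages irtQ and TAM; as a language-agnostic illustration (an assumption, not the authors' code), the Python sketch below estimates one examinee's ability θ by maximum likelihood given known item difficulties, using the Rasch success probability P(X=1 | θ, b) = exp(θ − b) / (1 + exp(θ − b)). Difficulties and responses are invented.

    # Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)).
    # Given item difficulties b and one response vector x, estimate theta by MLE.
    import numpy as np
    from scipy.optimize import minimize_scalar

    b = np.array([-3.0, -1.5, -0.6, 0.0, 0.8, 1.9])   # item difficulties (logits)
    x = np.array([1, 1, 1, 1, 0, 1])                  # 1 = correct, 0 = incorrect

    def neg_log_likelihood(theta):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))        # Rasch success probabilities
        return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

    res = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded")
    print(f"estimated theta = {res.x:.2f} logits")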
Results: The items satisfied unidimensionality (DETECT=-0.16). Item difficulty parameter estimates ranged from -3.87 to 1.96 logits (mean=-0.61), with a mean difficulty index of 0.79. Examinees' ability parameter estimates ranged from -0.71 to 3.14 logits (mean=1.17). GPT-4o, ChatGPT Free, and Claude.ai outperformed the median student ability (1.09 logits), scoring 2.68, 2.34, and 2.34, respectively, while Clova X, Mistral.ai, and Google Gemini exhibited lower scores (0.20, -0.12, 0.80). The test information curve peaked below θ=0, indicating suitability for examinees with low to average ability.
Conclusion: Advanced generative AI platforms approximated the performance of high-performing students, but outcomes varied. The Rasch model effectively evaluated AI competency, supporting its potential utility for future AI performance assessments in nursing education.
{"title":"Comparing generative artificial intelligence platforms and nursing student performance on a women's health nursing examination in Korea: a Rasch model approach.","authors":"Eun Jeong Ko, Tae Kyung Lee, Geum Hee Jeong","doi":"10.3352/jeehp.2025.22.23","DOIUrl":"10.3352/jeehp.2025.22.23","url":null,"abstract":"<p><strong>Purpose: </strong>This psychometric study aimed to compare the ability parameter estimates of generative artificial intelligence (AI) platforms with those of nursing students on a 50-item women's health nursing examination at Hallym University, Korea, using the Rasch model. It also sought to estimate item difficulty parameters and evaluate AI performance across varying difficulty levels.</p><p><strong>Methods: </strong>The exam, consisting of 39 multiple-choice items and 11 true/false items, was administered to 111 fourth-year nursing students in June 2023. In December 2024, 6 generative AI platforms (GPT-4o, ChatGPT Free, Claude.ai, Clova X, Mistral.ai, Google Gemini) completed the same items. The responses were analyzed using the Rasch model to estimate the ability and difficulty parameters. Unidimensionality was verified by the Dimensionality Evaluation to Enumerate Contributing Traits (DETECT), and analyses were conducted using the R packages irtQ and TAM.</p><p><strong>Results: </strong>The items satisfied unidimensionality (DETECT=-0.16). Item difficulty parameter estimates ranged from -3.87 to 1.96 logits (mean=-0.61), with a mean difficulty index of 0.79. Examinees' ability parameter estimates ranged from -0.71 to 3.14 logits (mean=1.17). GPT-4o, ChatGPT Free, and Claude.ai outperformed the median student ability (1.09 logits), scoring 2.68, 2.34, and 2.34, respectively, while Clova X, Mistral.ai, and Google Gemini exhibited lower scores (0.20, -0.12, 0.80). The test information curve peaked below θ=0, indicating suitability for examinees with low to average ability.</p><p><strong>Conclusion: </strong>Advanced generative AI platforms approximated the performance of high-performing students, but outcomes varied. The Rasch model effectively evaluated AI competency, supporting its potential utility for future AI performance assessments in nursing education.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"23"},"PeriodicalIF":3.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12770907/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145151345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance of large language models in medical licensing examinations: a systematic review and meta-analysis
Pub Date: 2025-01-01 | DOI: 10.3352/jeehp.2025.22.36
Haniyeh Nouri, Abdollah Mahdavi, Ali Abedi, Alireza Mohammadnia, Mahnaz Hamedan, Masoud Amanzadeh
Purpose: This study systematically evaluates and compares the performance of large language models (LLMs) in answering medical licensing examination questions. By conducting subgroup analyses based on language, question format, and model type, this meta-analysis aims to provide a comprehensive overview of LLM capabilities in medical education and clinical decision-making.
Methods: This systematic review, registered in PROSPERO and following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, searched MEDLINE (PubMed), Scopus, and Web of Science for relevant articles published up to February 1, 2025. The search strategy included Medical Subject Headings (MeSH) terms and keywords related to ("ChatGPT" OR "GPT" OR "LLM variants") AND ("medical licensing exam*" OR "medical exam*" OR "medical education" OR "radiology exam*"). Eligible studies evaluated LLM accuracy on medical licensing examination questions. Pooled accuracy was estimated using a random-effects model, with subgroup analyses by LLM type, language, and question format. Publication bias was assessed using Egger's regression test.
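A compact sketch of random-effects pooling on the logit scale with the DerSimonian-Laird between-study variance estimator, one standard way to obtain the kind of pooled accuracy and I² reported below; the study accuracies and sample sizes here are invented placeholders, not the review's extracted data.

    # Random-effects pooling of study accuracies (DerSimonian-Laird), logit scale.
    # p = per-study accuracy, n = number of exam items; values are placeholders.
    import numpy as np

    p = np.array([0.81, 0.60, 0.74, 0.70, 0.79])
    n = np.array([350, 300, 280, 320, 150])

    y = np.log(p / (1 - p))                     # logit-transformed accuracies
    v = 1 / (n * p) + 1 / (n * (1 - p))         # approximate within-study variances

    w = 1 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)             # Cochran's Q
    k = len(p)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1 / (v + tau2)                       # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    lo, hi = y_re - 1.96 * se, y_re + 1.96 * se

    expit = lambda z: 1 / (1 + np.exp(-z))      # back-transform to proportions
    I2 = max(0.0, (Q - (k - 1)) / Q) * 100
    print(f"pooled accuracy = {expit(y_re):.2f} "
          f"(95% CI {expit(lo):.2f}-{expit(hi):.2f}), I^2 = {I2:.0f}%")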
Results: This systematic review identified 2,404 studies. After removing duplicates and excluding irrelevant articles through title and abstract screening, 36 studies were included following full-text review. The pooled accuracy was 72% (95% confidence interval, 70.0% to 75.0%) with high heterogeneity (I²=99%, P<0.001). Among LLMs, GPT-4 achieved the highest accuracy (81%), followed by Bing (79%), Claude (74%), Gemini/Bard (70%), and GPT-3.5 (60%) (P=0.001). Performance differences across languages (range, 62% in Polish to 77% in German) were not statistically significant (P=0.170).
Conclusion: LLMs, particularly GPT-4, can match or exceed medical students' examination performance and may serve as supportive educational tools. However, due to variability and the risk of errors, they should be used cautiously as complements rather than replacements for traditional learning methods.
{"title":"Performance of large language models in medical licensing examinations: a systematic review and meta-analysis.","authors":"Haniyeh Nouri, Abdollah Mahdavi, Ali Abedi, Alireza Mohammadnia, Mahnaz Hamedan, Masoud Amanzadeh","doi":"10.3352/jeehp.2025.22.36","DOIUrl":"10.3352/jeehp.2025.22.36","url":null,"abstract":"<p><strong>Purpose: </strong>This study systematically evaluates and compares the performance of large language models (LLMs) in answering medical licensing examination questions. By conducting subgroup analyses based on language, question format, and model type, this meta-analysis aims to provide a comprehensive overview of LLM capabilities in medical education and clinical decision-making.</p><p><strong>Methods: </strong>This systematic review, registered in PROSPERO and following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, searched MEDLINE (PubMed), Scopus, and Web of Science for relevant articles published up to February 1, 2025. The search strategy included Medical Subject Headings (MeSH) terms and keywords related to (\"ChatGPT\" OR \"GPT\" OR \"LLM variants\") AND (\"medical licensing exam*\" OR \"medical exam*\" OR \"medical education\" OR \"radiology exam*\"). Eligible studies evaluated LLM accuracy on medical licensing examination questions. Pooled accuracy was estimated using a random-effects model, with subgroup analyses by LLM type, language, and question format. Publication bias was assessed using Egger's regression test.</p><p><strong>Results: </strong>This systematic review identified 2,404 studies. After removing duplicates and excluding irrelevant articles through title and abstract screening, 36 studies were included after full-text review. The pooled accuracy was 72% (95% confidence interval, 70.0% to 75.0%) with high heterogeneity (I2=99%, P<0.001). Among LLMs, GPT-4 achieved the highest accuracy (81%), followed by Bing (79%), Claude (74%), Gemini/Bard (70%), and GPT-3.5 (60%) (P=0.001). Performance differences across languages (range, 62% in Polish to 77% in German) were not statistically significant (P=0.170).</p><p><strong>Conclusion: </strong>LLMs, particularly GPT-4, can match or exceed medical students' examination performance and may serve as supportive educational tools. However, due to variability and the risk of errors, they should be used cautiously as complements rather than replacements for traditional learning methods.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"36"},"PeriodicalIF":3.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12976628/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145542995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validity of the formative physical therapy Student and Clinical Instructor Performance Instrument in the United States: a quasi-experimental, time-series study
Pub Date: 2025-01-01 | Epub Date: 2025-09-26 | DOI: 10.3352/jeehp.2025.22.26
Sean Gallivan, Jamie Bayliss
Purpose: The aim of this study was to assess the validity of the Student and Clinical Instructor Performance Instrument (SCIPAI), a novel formative tool used in physical therapist education to assess student and clinical instructor (CI) performance throughout clinical education experiences (CEEs). The researchers hypothesized that the SCIPAI would demonstrate concurrent, predictive, and construct validity while offering additional contemporary validity evidence.
Methods: In this quasi-experimental, time-series study, 811 student-CI pairs completed 2 SCIPAIs before and 2 after the CEE midpoint, plus an endpoint Clinical Performance Instrument (CPI), across beginning-to-terminal CEEs within a 1-year period. Spearman rank correlation analyses used final SCIPAI and CPI like-item scores to assess concurrent validity, and earlier SCIPAI and final CPI like-item scores to assess predictive validity. Construct validity was assessed via the progression of student and CI performance scores within CEEs using Wilcoxon signed-rank testing. No randomization or grouping of subjects occurred.
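A minimal sketch of the two analyses named here, Spearman rank correlation for concurrent validity between like items and the Wilcoxon signed-rank test for within-CEE progression; all ratings below are invented examples, not study data.

    # Concurrent validity (Spearman rho, final SCIPAI vs. CPI like items) and
    # within-CEE progression (Wilcoxon signed-rank, SCIPAI 1 vs. SCIPAI 4).
    import numpy as np
    from scipy.stats import spearmanr, wilcoxon

    final_scipai = np.array([3, 4, 4, 5, 3, 4, 5, 2, 4, 5])
    final_cpi    = np.array([3, 4, 5, 5, 3, 3, 5, 2, 4, 4])
    rho, p_rho = spearmanr(final_scipai, final_cpi)

    scipai_1 = np.array([2, 3, 3, 4, 2, 3, 4, 2, 3, 4])
    scipai_4 = np.array([3, 4, 5, 5, 3, 4, 5, 3, 4, 5])
    # method="approx" avoids the exact test's restriction on tied differences
    stat, p_w = wilcoxon(scipai_1, scipai_4, method="approx")

    print(f"Spearman rho={rho:.2f} (P={p_rho:.3f}); Wilcoxon P={p_w:.3f}")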
Results: Moderate correlations existed between like items on the final SCIPAI and the CPI (P<0.005) and between some like items on earlier SCIPAIs and the final CPI (P<0.005). Student performance scores progressed from SCIPAI 1 to SCIPAI 4 within CEEs (P<0.005). Although more CIs showed progression than regression in performance from SCIPAI 1 to SCIPAI 4, the larger magnitude of the decreases produced an aggregate decline in CI performance ratings (P<0.005).
Conclusion: The SCIPAI demonstrates concurrent, predictive, and construct validity when used by students and CIs to rate student performance at regular points throughout clinical education experiences.
{"title":"Validity of the formative physical therapy Student and Clinical Instructor Performance Instrument in the United States: a quasi-experimental, time-series study.","authors":"Sean Gallivan, Jamie Bayliss","doi":"10.3352/jeehp.2025.22.26","DOIUrl":"10.3352/jeehp.2025.22.26","url":null,"abstract":"<p><strong>Purpose: </strong>The aim of this study was to assess the validity of the Student and Clinical Instructor Performance Instrument (SCIPAI), a novel formative tool used in physical therapist education to assess student and clinical instructor (CI) performance throughout clinical education experiences (CEEs). The researchers hypothesized that the SCIPAI would demonstrate concurrent, predictive, and construct validity while offering additional contemporary validity evidence.</p><p><strong>Methods: </strong>This quasi-experimental, time-series study had 811 student-CI pairs complete 2 SCIPAIs before after CEE midpoint, and an endpoint Clinical Performance Instrument (CPI) during beginning to terminal CEEs in a 1-year period. Spearman rank correlation analyses used final SCIPAI and CPI like-item scores to assess concurrent validity; and earlier SCIPAI and final CPI like-item scores to assess predictive validity. Construct validity was assessed via progression of student and CI performance scores within CEEs using Wilcoxon signed-rank testing. No randomization/grouping of subjects occurred.</p><p><strong>Results: </strong>Moderate correlation existed between like final SCIPAI and CPI items (P<0.005) and between some like items of earlier SCIPAIs and final CPIs (P<0.005). Student performance scores demonstrated progress from SCIPAIs 1 to 4 within CEEs (P<0.005). While a greater number of CIs demonstrated progression rather than regression in performance from SCIPAI 1 to SCIPAI 4, the greater magnitude of decreases in CI performance contributed to an aggregate ratings decrease of CI performance (P<0.005).</p><p><strong>Conclusion: </strong>The SCIPAI demonstrates concurrent, predictive, and construct validity when used by students and CIs to rate student performance at regular points throughout clinical education experiences.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"26"},"PeriodicalIF":3.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12688320/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145150958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Longitudinal relationships between Korean medical students' academic performance in medical knowledge and clinical performance examinations: a retrospective longitudinal study
Pub Date: 2025-01-01 | Epub Date: 2025-06-10 | DOI: 10.3352/jeehp.2025.22.18
Yulim Kang, Hae Won Kim
Purpose: This study investigated the longitudinal relationships between performance on 3 examinations assessing medical knowledge and clinical skills among Korean medical students in the clinical phase. This study addressed the stability of each examination score and the interrelationships among examinations over time.
Methods: A retrospective longitudinal study was conducted at Yonsei University College of Medicine in Korea with a cohort of 112 medical students over 2 years. The students were in their third year in 2022 and progressed to the fourth year in 2023. We obtained comprehensive clinical science examination (CCSE) and progress test (PT) scores 3 times (T1-T3), and clinical performance examination (CPX) scores twice (T1 and T2). Autoregressive cross-lagged models were fitted to analyze their relationships.
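Autoregressive cross-lagged models are typically fitted jointly as structural equation models; as a simplified, hedged illustration of what an autoregressive or cross-lagged path means, the sketch below regresses each standardized T2 score on all standardized T1 scores with statsmodels. The data are simulated, with effect sizes loosely echoing the betas reported below.

    # Simplified cross-lagged illustration: regress each T2 score on all T1 scores.
    # A full autoregressive cross-lagged model is fit jointly as an SEM; this
    # path-by-path OLS approximation uses simulated, hypothetical data.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 112
    df = pd.DataFrame({
        "CCSE_T1": rng.normal(size=n),
        "PT_T1": rng.normal(size=n),
        "CPX_T1": rng.normal(size=n),
    })
    df["CCSE_T2"] = 0.6 * df["CCSE_T1"] + 0.16 * df["CPX_T1"] + rng.normal(scale=0.7, size=n)
    df["PT_T2"] = 0.5 * df["PT_T1"] + 0.47 * df["CCSE_T1"] + rng.normal(scale=0.6, size=n)

    z = (df - df.mean()) / df.std()             # standardize to get beta weights
    for outcome in ["CCSE_T2", "PT_T2"]:
        X = sm.add_constant(z[["CCSE_T1", "PT_T1", "CPX_T1"]])
        fit = sm.OLS(z[outcome], X).fit()
        print(outcome, fit.params.round(2).to_dict())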
Results: For each of the 3 examinations, the score at 1 time point predicted the subsequent score. Regarding cross-lagged effects, the CCSE at T1 predicted PT at T2 (β=0.472, P<0.001) and CCSE at T2 predicted PT at T3 (β=0.527, P<0.001). The CPX at T1 predicted the CCSE at T2 (β=0.163, P=0.006), and the CPX at T2 predicted the CCSE at T3 (β=0.154, P=0.006). The PT at T1 predicted the CPX at T2 (β=0.273, P=0.006).
Conclusion: The study identified each examination's stability and the complexity of the longitudinal relationships between them. These findings may help predict medical students' performance on subsequent examinations, potentially informing the provision of necessary student support.
{"title":"Longitudinal relationships between Korean medical students' academic performance in medical knowledge and clinical performance examinations: a retrospective longitudinal study.","authors":"Yulim Kang, Hae Won Kim","doi":"10.3352/jeehp.2025.22.18","DOIUrl":"10.3352/jeehp.2025.22.18","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated the longitudinal relationships between performance on 3 examinations assessing medical knowledge and clinical skills among Korean medical students in the clinical phase. This study addressed the stability of each examination score and the interrelationships among examinations over time.</p><p><strong>Methods: </strong>A retrospective longitudinal study was conducted at Yonsei University College of Medicine in Korea with a cohort of 112 medical students over 2 years. The students were in their third year in 2022 and progressed to the fourth year in 2023. We obtained comprehensive clinical science examination (CCSE) and progress test (PT) scores 3 times (T1-T3), and clinical performance examination (CPX) scores twice (T1 and T2). Autoregressive cross-lagged models were fitted to analyze their relationships.</p><p><strong>Results: </strong>For each of the 3 examinations, the score at 1 time point predicted the subsequent score. Regarding cross-lagged effects, the CCSE at T1 predicted PT at T2 (β=0.472, P<0.001) and CCSE at T2 predicted PT at T3 (β=0.527, P<0.001). The CPX at T1 predicted the CCSE at T2 (β=0.163, P=0.006), and the CPX at T2 predicted the CCSE at T3 (β=0.154, P=0.006). The PT at T1 predicted the CPX at T2 (β=0.273, P=0.006).</p><p><strong>Conclusion: </strong>The study identified each examination's stability and the complexity of the longitudinal relationships between them. These findings may help predict medical students' performance on subsequent examinations, potentially informing the provision of necessary student support.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"18"},"PeriodicalIF":3.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12365683/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144267588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empirical effect of the Dr LEE Jong-wook Fellowship Program to empower sustainable change for the health workforce in Tanzania: a mixed-methods study
Pub Date: 2025-01-01 | Epub Date: 2025-01-20 | DOI: 10.3352/jeehp.2025.22.6
Masoud Dauda, Swabaha Aidarus Yusuph, Harouni Yasini, Issa Mmbaga, Perpetua Mwambinngu, Hansol Park, Gyeongbae Seo, Kyoung Kyun Oh
Purpose: This study evaluated the Dr LEE Jong-wook Fellowship Program’s impact on Tanzania’s health workforce, focusing on relevance, effectiveness, efficiency, impact, and sustainability in addressing healthcare gaps.
Methods: A mixed-methods research design was employed. Data were collected from 97 out of 140 alumni through an online survey, 35 in-depth interviews, and one focus group discussion. The study was conducted from November to December 2023 and included alumni from 2009 to 2022. Measurement instruments included structured questionnaires for quantitative data and semi-structured guides for qualitative data. Quantitative analysis involved descriptive and inferential statistics (Spearman’s rank correlation, non-parametric tests) using Python ver. 3.11.0 and Stata ver. 14.0. Thematic analysis was employed to analyze qualitative data using NVivo ver. 12.0.
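The quantitative analysis ran in Python; as a hedged sketch of the descriptive and correlational steps (not the authors' code), the snippet below summarizes the five criteria and computes Spearman's ρ between effectiveness and impact, on simulated scores calibrated to the reported means and SDs.

    # Per-criterion summaries and the effectiveness-impact correlation.
    # All survey scores are simulated stand-ins for the 97 alumni responses.
    import numpy as np
    import pandas as pd
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)
    effectiveness = rng.normal(86.1, 11.2, 97)
    impact = 87.7 + 0.66 * (effectiveness - 86.1) + rng.normal(0, 6.5, 97)
    scores = pd.DataFrame({
        "relevance": rng.normal(91.6, 8.6, 97),
        "effectiveness": effectiveness,
        "efficiency": rng.normal(82.7, 10.2, 97),
        "impact": impact,
        "sustainability": rng.normal(58.0, 11.1, 97),
    }).clip(0, 100)

    print(scores.agg(["mean", "std"]).round(1))       # criterion means and SDs
    rho, p = spearmanr(scores["effectiveness"], scores["impact"])
    print(f"effectiveness vs. impact: rho={rho:.3f}, P={p:.2e}")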
Results: Findings indicated high relevance (mean=91.6, standard deviation [SD]=8.6), effectiveness (mean=86.1, SD=11.2), efficiency (mean=82.7, SD=10.2), and impact (mean=87.7, SD=9.9), with improved skills, confidence, and institutional service quality. However, sustainability had a lower score (mean=58.0, SD=11.1), reflecting challenges in follow-up support and resource allocation. Effectiveness strongly correlated with impact (ρ=0.746, P<0.001). The qualitative findings revealed that participants valued tailored training but highlighted barriers, such as language challenges and insufficient practical components. Alumni-led initiatives contributed to knowledge sharing, but limited resources constrained sustainability.
Conclusion: The Fellowship Program enhanced Tanzania’s health workforce capacity, but it requires localized curricula and strengthened alumni networks for sustainability. These findings provide actionable insights for improving similar programs globally, confirming the hypothesis that tailored training positively influences workforce and institutional outcomes.
{"title":"Empirical effect of the Dr LEE Jong-wook Fellowship Program to empower sustainable change for the health workforce in Tanzania: a mixed-methods study","authors":"Masoud Dauda, Swabaha Aidarus Yusuph, Harouni Yasini, Issa Mmbaga, Perpetua Mwambinngu, Hansol Park, Gyeongbae Seo, Kyoung Kyun Oh","doi":"10.3352/jeehp.2025.22.6","DOIUrl":"10.3352/jeehp.2025.22.6","url":null,"abstract":"<p><strong>Purpose: </strong>This study evaluated the Dr LEE Jong-wook Fellowship Program’s impact on Tanzania’s health workforce, focusing on relevance, effectiveness, efficiency, impact, and sustainability in addressing healthcare gaps.</p><p><strong>Methods: </strong>A mixed-methods research design was employed. Data were collected from 97 out of 140 alumni through an online survey, 35 in-depth interviews, and one focus group discussion. The study was conducted from November to December 2023 and included alumni from 2009 to 2022. Measurement instruments included structured questionnaires for quantitative data and semi-structured guides for qualitative data. Quantitative analysis involved descriptive and inferential statistics (Spearman’s rank correlation, non-parametric tests) using Python ver. 3.11.0 and Stata ver. 14.0. Thematic analysis was employed to analyze qualitative data using NVivo ver. 12.0.</p><p><strong>Results: </strong>Findings indicated high relevance (mean=91.6, standard deviation [SD]=8.6), effectiveness (mean=86.1, SD=11.2), efficiency (mean=82.7, SD=10.2), and impact (mean=87.7, SD=9.9), with improved skills, confidence, and institutional service quality. However, sustainability had a lower score (mean=58.0, SD=11.1), reflecting challenges in follow-up support and resource allocation. Effectiveness strongly correlated with impact (ρ=0.746, P<0.001). The qualitative findings revealed that participants valued tailored training but highlighted barriers, such as language challenges and insufficient practical components. Alumni-led initiatives contributed to knowledge sharing, but limited resources constrained sustainability.</p><p><strong>Conclusion: </strong>The Fellowship Program enhanced Tanzania’s health workforce capacity, but it requires localized curricula and strengthened alumni networks for sustainability. These findings provide actionable insights for improving similar programs globally, confirming the hypothesis that tailored training positively influences workforce and institutional outcomes.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"6"},"PeriodicalIF":9.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12003955/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empathy and tolerance of ambiguity in medical students and doctors participating in art-based observational training at the Rijksmuseum in Amsterdam, the Netherlands: a before-and-after study
Pub Date: 2025-01-01 | Epub Date: 2025-01-14 | DOI: 10.3352/jeehp.2025.22.3
Stella Anna Bult, Thomas van Gulik
Purpose: This research presents an experimental study using validated questionnaires to quantitatively assess the outcomes of art-based observational training in medical students, residents, and specialists. The study tested the hypothesis that art-based observational training would lead to measurable effects on judgement skills (tolerance of ambiguity) and empathy in medical students and doctors.
Methods: An experimental cohort study with pre- and post-intervention assessments was conducted using validated questionnaires and qualitative evaluation forms to examine the outcomes of art-based observational training in medical students and doctors. Between December 2023 and June 2024, 15 art courses were conducted in the Rijksmuseum in Amsterdam. Participants were assessed on empathy using the Jefferson Scale of Empathy (JSE) and tolerance of ambiguity using the Tolerance of Ambiguity in Medical Students and Doctors (TAMSAD) scale.
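The abstract does not name the significance test behind the pre/post comparison; a paired t-test on each scale is one standard choice for this before-and-after design, sketched below on simulated JSE scores (the mean gain matches the reported 3.71 points, but all individual values are invented).

    # Pre/post comparison of questionnaire scores via a paired t-test
    # (an assumed analysis choice; the abstract does not name the test).
    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(3)
    pre_jse = rng.normal(110, 10, 29)               # 29 participants completed the JSE
    post_jse = pre_jse + rng.normal(3.71, 6, 29)    # mean gain echoing the abstract

    t, p = ttest_rel(post_jse, pre_jse)
    print(f"mean change = {np.mean(post_jse - pre_jse):.2f} points, t={t:.2f}, P={p:.4f}")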
Results: In total, 91 participants were included; 29 participants completed the JSE and 62 completed the TAMSAD scales. The results showed statistically significant post-test increases for mean JSE and TAMSAD scores (3.71 points for the JSE, ranging from 20 to 140, and 1.86 points for the TAMSAD, ranging from 0 to 100). The qualitative findings were predominantly positive.
Conclusion: The results suggest that incorporating art-based observational training in medical education improves empathy and tolerance of ambiguity. This study highlights the importance of art-based observational training in medical education in the professional development of medical students and doctors.
{"title":"Empathy and tolerance of ambiguity in medical students and doctors participating in art-based observational training at the Rijksmuseum in Amsterdam, the Netherlands: a before-and-after study","authors":"Stella Anna Bult, Thomas van Gulik","doi":"10.3352/jeehp.2025.22.3","DOIUrl":"10.3352/jeehp.2025.22.3","url":null,"abstract":"<p><strong>Purpose: </strong>This research presents an experimental study using validated questionnaires to quantitatively assess the outcomes of art-based observational training in medical students, residents, and specialists. The study tested the hypothesis that art-based observational training would lead to measurable effects on judgement skills (tolerance of ambiguity) and empathy in medical students and doctors.</p><p><strong>Methods: </strong>An experimental cohort study with pre- and post-intervention assessments was conducted using validated questionnaires and qualitative evaluation forms to examine the outcomes of art-based observational training in medical students and doctors. Between December 2023 and June 2024, 15 art courses were conducted in the Rijksmuseum in Amsterdam. Participants were assessed on empathy using the Jefferson Scale of Empathy (JSE) and tolerance of ambiguity using the Tolerance of Ambiguity in Medical Students and Doctors (TAMSAD) scale.</p><p><strong>Results: </strong>In total, 91 participants were included; 29 participants completed the JSE and 62 completed the TAMSAD scales. The results showed statistically significant post-test increases for mean JSE and TAMSAD scores (3.71 points for the JSE, ranging from 20 to 140, and 1.86 points for the TAMSAD, ranging from 0 to 100). The qualitative findings were predominantly positive.</p><p><strong>Conclusion: </strong>The results suggest that incorporating art-based observational training in medical education improves empathy and tolerance of ambiguity. This study highlights the importance of art-based observational training in medical education in the professional development of medical students and doctors.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"3"},"PeriodicalIF":9.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11880821/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142980319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison between GPT-4 and human raters in grading pharmacy students' exam responses in Malaysia: a cross-sectional study
Pub Date: 2025-01-01 | Epub Date: 2025-07-28 | DOI: 10.3352/jeehp.2025.22.20
Wuan Shuen Yap, Pui San Saw, Li Ling Yeap, Shaun Wen Huey Lee, Wei Jin Wong, Ronald Fook Seng Lee
Purpose: Manual grading is time-consuming and prone to inconsistencies, prompting the exploration of generative artificial intelligence tools such as GPT-4 to enhance efficiency and reliability. This study investigated GPT-4's potential in grading pharmacy students' exam responses, focusing on the impact of optimized prompts. Specifically, it evaluated the alignment between GPT-4 and human raters, assessed GPT-4's consistency over time, and determined its error rates in grading pharmacy students' exam responses.
Methods: We conducted a comparative study using past exam responses graded by university-trained raters and by GPT-4. Responses were randomized before evaluation by GPT-4, accessed via a Plus account between April and September 2024. Prompt optimization was performed on 16 responses, followed by evaluation of 3 prompt delivery methods. We then applied the optimized approach across 4 item types. Intraclass correlation coefficients and error analyses were used to assess consistency and agreement between GPT-4 and human ratings.
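As a sketch of the agreement analysis, the snippet below computes intraclass correlation coefficients between a human rater and GPT-4 using pingouin (an assumed library choice; the study does not name its software), on invented ratings in long format.

    # Agreement between GPT-4 and a human rater via intraclass correlation.
    # Ratings are invented; each response is scored once by each rater.
    import pandas as pd
    import pingouin as pg

    scores = pd.DataFrame({
        "response": list(range(8)) * 2,
        "rater": ["human"] * 8 + ["gpt4"] * 8,
        "score": [7, 5, 8, 6, 9, 4, 7, 6,
                  7, 6, 8, 5, 9, 4, 6, 6],
    })
    icc = pg.intraclass_corr(data=scores, targets="response",
                             raters="rater", ratings="score")
    print(icc[["Type", "ICC", "CI95%"]])            # ICC(2,1), ICC(3,1), etc.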
Results: GPT-4's ratings aligned reasonably well with those of human raters, demonstrating moderate to excellent reliability (intraclass correlation coefficient=0.617-0.933), depending on item type and the optimized prompt. When stratified by grade bands, GPT-4 was less consistent in marking high-scoring responses (Z=-5.71 to 4.62, P<0.001). Overall, despite substantial alignment with human raters in many cases, discrepancies across item types and a tendency to commit basic errors necessitate continued educator involvement to ensure grading accuracy.
Conclusion: With optimized prompts, GPT-4 shows promise as a supportive tool for grading pharmacy students' exam responses, particularly for objective tasks. However, its limitations, including errors and variability in grading high-scoring responses, require ongoing human oversight. Future research should explore advanced generative artificial intelligence models and broader assessment formats to further enhance grading reliability.
{"title":"Comparison between GPT-4 and human raters in grading pharmacy students' exam responses in Malaysia: a cross-sectional study.","authors":"Wuan Shuen Yap, Pui San Saw, Li Ling Yeap, Shaun Wen Huey Lee, Wei Jin Wong, Ronald Fook Seng Lee","doi":"10.3352/jeehp.2025.22.20","DOIUrl":"https://doi.org/10.3352/jeehp.2025.22.20","url":null,"abstract":"<p><strong>Purpose: </strong>Manual grading is time-consuming and prone to inconsistencies, prompting the exploration of generative artificial intelligence tools such as GPT-4 to enhance efficiency and reliability. This study investigated GPT-4's potential in grading pharmacy students' exam responses, focusing on the impact of optimized prompts. Specifically, it evaluated the alignment between GPT-4 and human raters, assessed GPT-4's consistency over time, and determined its error rates in grading pharmacy students' exam responses.</p><p><strong>Methods: </strong>We conducted a comparative study using past exam responses graded by university-trained raters and by GPT-4. Responses were randomized before evaluation by GPT-4, accessed via a Plus account between April and September 2024. Prompt optimization was performed on 16 responses, followed by evaluation of 3 prompt delivery methods. We then applied the optimized approach across 4 item types. Intraclass correlation coefficients and error analyses were used to assess consistency and agreement between GPT-4 and human ratings.</p><p><strong>Results: </strong>GPT-4's ratings aligned reasonably well with human raters, demonstrating moderate to excellent reliability (intraclass correlation coefficient=0.617-0.933), depending on item type and the optimized prompt. When stratified by grade bands, GPT-4 was less consistent in marking high-scoring responses (Z=-5.71-4.62, P<0.001). Overall, despite achieving substantial alignment with human raters in many cases, discrepancies across item types and a tendency to commit basic errors necessitate continued educator involvement to ensure grading accuracy.</p><p><strong>Conclusion: </strong>With optimized prompts, GPT-4 shows promise as a supportive tool for grading pharmacy students' exam responses, particularly for objective tasks. However, its limitations-including errors and variability in grading high-scoring responses-require ongoing human oversight. Future research should explore advanced generative artificial intelligence models and broader assessment formats to further enhance grading reliability.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"20"},"PeriodicalIF":3.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145151398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixed reality versus manikins in basic life support simulation-based training for medical students in France: the mixed reality non-inferiority randomized controlled trial
Pub Date: 2025-01-01 | Epub Date: 2025-05-12 | DOI: 10.3352/jeehp.2025.22.15
Sofia Barlocco De La Vega, Evelyne Guerif-Dubreucq, Jebrane Bouaoud, Myriam Awad, Léonard Mathon, Agathe Beauvais, Thomas Olivier, Pierre-Clément Thiébaud, Anne-Laure Philippon
Purpose: To compare the effectiveness of mixed reality with traditional manikin-based simulation in basic life support (BLS) training, testing the hypothesis that mixed reality is non-inferior to manikin-based simulation.
Methods: A non-inferiority randomized controlled trial was conducted. Third-year medical students were randomized into 2 groups. The mixed reality group received 32 minutes of individual training using a virtual reality headset and a torso for chest compressions (CC). The manikin group participated in 2 hours of group training consisting of theoretical and practical sessions using a low-fidelity manikin. The primary outcome was the overall BLS performance score, assessed at 1 month through a standardized BLS scenario using a 10-item assessment scale. The quality of CC, student satisfaction, and confidence levels were secondary outcomes and assessed through superiority analyses.
Results: Data from 155 participants were analyzed, with 84 in the mixed reality group and 71 in the manikin group. The mean overall BLS performance score was 6.4 (mixed reality) vs. 6.5 (manikin), (mean difference, -0.1; 95% confidence interval [CI], -0.45 to +∞). CC depth was greater in the manikin group (50.3 mm vs. 46.6 mm; mean difference, -3.7 mm; 95% CI, -6.5 to -0.9), with 61.2% achieving optimal depth compared to 43.8% in the mixed reality group (mean difference, 17.4%; 95% CI, -29.3 to -5.5). Satisfaction was higher in the mixed reality group (4.9/5 vs. 4.7/5 in the manikin group; difference, 0.2; 95% CI, 0.07 to 0.33), as was confidence in performing BLS (3.9/5 vs. 3.6/5; difference, 0.3; 95% CI, 0.11 to 0.58). No other significant differences were observed for secondary outcomes.
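A worked sketch of the one-sided interval behind the primary result above: using the group means and sizes from this paragraph, an assumed common SD of 1.3 points, and a hypothetical non-inferiority margin (the abstract reports neither the SDs nor the margin), the arithmetic lands near the reported lower bound of -0.45.

    # Non-inferiority check via a one-sided 95% CI for the mean score difference
    # (mixed reality minus manikin). SDs and the margin are illustrative assumptions.
    import numpy as np
    from scipy.stats import t

    m_mr, sd_mr, n_mr = 6.4, 1.3, 84         # mixed reality group (SD assumed)
    m_mk, sd_mk, n_mk = 6.5, 1.3, 71         # manikin group (SD assumed)
    margin = -0.5                            # hypothetical non-inferiority margin

    diff = m_mr - m_mk
    se = np.sqrt(sd_mr**2 / n_mr + sd_mk**2 / n_mk)
    dof = n_mr + n_mk - 2                    # simple pooled-df approximation
    lower = diff - t.ppf(0.95, dof) * se     # one-sided 95% lower bound

    print(f"difference = {diff:.2f}, one-sided 95% CI lower bound = {lower:.2f}")
    print("non-inferior" if lower > margin else "inconclusive")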
Conclusion: Mixed reality is non-inferior to manikin simulation in terms of overall BLS performance score assessed at 1 month.
{"title":"Mixed reality versus manikins in basic life support simulation-based training for medical students in France: the mixed reality non-inferiority randomized controlled trial.","authors":"Sofia Barlocco De La Vega, Evelyne Guerif-Dubreucq, Jebrane Bouaoud, Myriam Awad, Léonard Mathon, Agathe Beauvais, Thomas Olivier, Pierre-Clément Thiébaud, Anne-Laure Philippon","doi":"10.3352/jeehp.2025.22.15","DOIUrl":"10.3352/jeehp.2025.22.15","url":null,"abstract":"<p><strong>Purpose: </strong>To compare the effectiveness of mixed reality with traditional manikin-based simulation in basic life support (BLS) training, testing the hypothesis that mixed reality is non-inferior to manikin-based simulation.</p><p><strong>Methods: </strong>A non-inferiority randomized controlled trial was conducted. Third-year medical students were randomized into 2 groups. The mixed reality group received 32 minutes of individual training using a virtual reality headset and a torso for chest compressions (CC). The manikin group participated in 2 hours of group training consisting of theoretical and practical sessions using a low-fidelity manikin. The primary outcome was the overall BLS performance score, assessed at 1 month through a standardized BLS scenario using a 10-item assessment scale. The quality of CC, student satisfaction, and confidence levels were secondary outcomes and assessed through superiority analyses.</p><p><strong>Results: </strong>Data from 155 participants were analyzed, with 84 in the mixed reality group and 71 in the manikin group. The mean overall BLS performance score was 6.4 (mixed reality) vs. 6.5 (manikin), (mean difference, -0.1; 95% confidence interval [CI], -0.45 to +∞). CC depth was greater in the manikin group (50.3 mm vs. 46.6 mm; mean difference, -3.7 mm; 95% CI, -6.5 to -0.9), with 61.2% achieving optimal depth compared to 43.8% in the mixed reality group (mean difference, 17.4%; 95% CI, -29.3 to -5.5). Satisfaction was higher in the mixed reality group (4.9/5 vs. 4.7/5 in the manikin group; difference, 0.2; 95% CI, 0.07 to 0.33), as was confidence in performing BLS (3.9/5 vs. 3.6/5; difference, 0.3; 95% CI, 0.11 to 0.58). No other significant differences were observed for secondary outcomes.</p><p><strong>Conclusion: </strong>Mixed reality is non-inferior to manikin simulation in terms of overall BLS performance score assessed at 1 month.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"15"},"PeriodicalIF":3.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144040345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The impact of differential item functioning on ability estimation using the Korean Medical Licensing Examination with computerized adaptive testing: a post-hoc simulation study
Pub Date: 2025-01-01 | Epub Date: 2025-10-10 | DOI: 10.3352/jeehp.2025.22.31
Dogyeong Kim, Jeongwook Choi, Dong Gi Seo
Purpose: This study examined the impact of differential item functioning (DIF) on ability estimation in a computerized adaptive testing (CAT) environment using real response data from the 2017 Korean Medical Licensing Examination (KMLE). We hypothesized that excluding gender-based DIF items would improve estimation accuracy, particularly for examinees at the extremes of the ability scale.
Methods: The study was conducted in 2 steps: (1) DIF detection and (2) post-hoc simulation. The analysis used data from 3,259 examinees who completed all 360 dichotomous items. Gender-based DIF was detected with the residual-based DIF method (reference group: males; focal group: females). Two CAT conditions (all items vs. DIF-excluded) were compared against a "true θ" estimated from a fixed-form test of 264 non-DIF items. Accuracy was evaluated using bias, root mean square error (RMSE), and correlation with true θ.
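The study used a residual-based DIF method; as an illustration of the general idea of conditioning on ability before testing for a group effect, the sketch below applies the common logistic-regression DIF screen (group main effect flags uniform DIF; the ability-by-group interaction flags non-uniform DIF) to simulated data, a stated substitute for the residual-based procedure.

    # DIF screening by logistic regression, a common alternative to the
    # residual-based method used in the study. Data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 3000
    ability = rng.normal(size=n)
    group = rng.integers(0, 2, n)            # 0 = reference, 1 = focal
    # Item with uniform DIF: the focal group is 0.5 logits less likely to succeed.
    logit = 1.2 * ability - 0.5 * group + 0.3
    correct = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = pd.DataFrame({"ability": ability, "group": group,
                      "ability_x_group": ability * group})
    fit = sm.Logit(correct.astype(int), sm.add_constant(X)).fit(disp=0)
    print(fit.params.round(2))               # significant group term = uniform DIF
    print(fit.pvalues.round(4))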
Results: In the CAT condition excluding DIF items, accuracy improved, with RMSE reduced and correlation with true θ increased. However, bias was slightly larger in magnitude. Gender-specific analyses showed that DIF removal reduced the underestimation of female ability but increased the underestimation of male ability, yielding estimates that were fairer across genders. When DIF items were included, estimation errors were more pronounced at both low and high ability levels.
Conclusion: Managing DIF in CAT-based high-stakes examinations can enhance fairness and precision. Using real examinee data, this study provides practical evidence of the implications of DIF for CAT-based measurement and supports fairness-oriented test design.
{"title":"The impact of differential item functioning on ability estimation using the Korean Medical Licensing Examination with computerized adaptive testing: a post-hoc simulation study.","authors":"Dogyeong Kim, Jeongwook Choi, Dong Gi Seo","doi":"10.3352/jeehp.2025.22.31","DOIUrl":"https://doi.org/10.3352/jeehp.2025.22.31","url":null,"abstract":"<p><strong>Purpose: </strong>This study examined the impact of differential item functioning (DIF) on ability estimation in a computerized adaptive testing (CAT) environment using real response data from the 2017 Korean Medical Licensing Examination (KMLE). We hypothesized that excluding gender-based DIF items would improve estimation accuracy, particularly for examinees at the extremes of the ability scale.</p><p><strong>Methods: </strong>The study was conducted in 2 steps: (1) DIF detection and (2) post-hoc simulation. The analysis used data from 3,259 examinees who completed all 360 dichotomous items. Gender-based DIF was detected with the residual-based DIF method (reference group: males; focal group: females). Two CAT conditions (all items vs. DIF-excluded) were compared against a \"true θ\" estimated from a fixed-form test of 264 non-DIF items. Accuracy was evaluated using bias, root mean square error (RMSE), and correlation with true θ.</p><p><strong>Results: </strong>In the CAT condition excluding DIF items, accuracy improved, with RMSE reduced and correlation with true θ increased. However, bias was slightly larger in magnitude. Gender-specific analyses showed that DIF removal reduced the underestimation of female ability but increased the underestimation of male ability, yielding estimates that were fairer across genders. When DIF items were included, estimation errors were more pronounced at both low and high ability levels.</p><p><strong>Conclusion: </strong>Managing DIF in CAT-based high-stakes examinations can enhance fairness and precision. Using real examinee data, this study provides practical evidence of the implications of DIF for CAT-based measurement and supports fairness-oriented test design.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"31"},"PeriodicalIF":3.7,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146047194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}