
Latest articles from the Journal of Educational Evaluation for Health Professions

Performance of GPT-4o and o1-Pro on United Kingdom Medical Licensing Assessment-style items: a comparative study.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-10-10 DOI: 10.3352/jeehp.2025.22.30
Behrad Vakili, Aadam Ahmad, Mahsa Zolfaghari

Purpose: Large language models (LLMs) such as ChatGPT, and their potential to support autonomous learning for licensing exams like the UK Medical Licensing Assessment (UKMLA), are of growing interest. However, empirical evaluations of artificial intelligence (AI) performance against the UKMLA standard remain limited.

Methods: We evaluated the performance of 2 recent ChatGPT versions, GPT-4o and o1-Pro, on a curated set of 374 UKMLA-style single-best-answer items spanning diverse medical specialties. Statistical comparisons using McNemar's test assessed the significance of differences between the 2 models. Specialties were analyzed to identify domain-specific variation. In addition, 20 image-based items were evaluated.

Results: GPT-4o achieved an accuracy of 88.8%, while o1-Pro achieved 93.0%. McNemar's test revealed a statistically significant difference in favor of o1-Pro. Across specialties, both models demonstrated excellent performance in surgery, psychiatry, and infectious diseases. Notable differences arose in dermatology, respiratory medicine, and imaging, where o1-Pro consistently outperformed GPT-4o. Nevertheless, isolated weaknesses in general practice were observed. The analysis of image-based items showed 75% accuracy for GPT-4o and 90% for o1-Pro (P=0.25).
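
The paired comparison above can be sketched in a few lines. Below is a minimal exact McNemar test built from the two discordant-pair counts (items one model answered correctly and the other did not); the counts used in the usage note are hypothetical, since the abstract does not report them.

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar P-value from the discordant-pair counts:
    b = items model A got right and model B got wrong, c = the reverse."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of a difference
    k = min(b, c)
    # Exact binomial tail under H0 (p = 0.5), doubled for a two-sided test.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

With hypothetical counts, a strongly asymmetric split such as `mcnemar_exact(4, 16)` falls below the conventional 0.05 threshold, while a near-even split such as `mcnemar_exact(8, 12)` does not.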

Conclusion: ChatGPT shows strong potential as an adjunct learning tool for UKMLA preparation, with both models achieving scores above the calculated pass mark. This underscores the promise of advanced AI models in medical education. However, specialty-specific inconsistencies suggest AI tools should complement, rather than replace, traditional study methods.

Citations: 0
Comparing generative artificial intelligence platforms and nursing student performance on a women's health nursing examination in Korea: a Rasch model approach.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-09-05 DOI: 10.3352/jeehp.2025.22.23
Eun Jeong Ko, Tae Kyung Lee, Geum Hee Jeong

Purpose: This psychometric study aimed to compare the ability parameter estimates of generative artificial intelligence (AI) platforms with those of nursing students on a 50-item women's health nursing examination at Hallym University, Korea, using the Rasch model. It also sought to estimate item difficulty parameters and evaluate AI performance across varying difficulty levels.

Methods: The exam, consisting of 39 multiple-choice items and 11 true/false items, was administered to 111 fourth-year nursing students in June 2023. In December 2024, 6 generative AI platforms (GPT-4o, ChatGPT Free, Claude.ai, Clova X, Mistral.ai, Google Gemini) completed the same items. The responses were analyzed using the Rasch model to estimate the ability and difficulty parameters. Unidimensionality was verified by the Dimensionality Evaluation to Enumerate Contributing Traits (DETECT), and analyses were conducted using the R packages irtQ and TAM.

Results: The items satisfied unidimensionality (DETECT=-0.16). Item difficulty parameter estimates ranged from -3.87 to 1.96 logits (mean=-0.61), with a mean difficulty index of 0.79. Examinees' ability parameter estimates ranged from -0.71 to 3.14 logits (mean=1.17). GPT-4o, ChatGPT Free, and Claude.ai outperformed the median student ability (1.09 logits), scoring 2.68, 2.34, and 2.34, respectively, while Clova X, Mistral.ai, and Google Gemini exhibited lower scores (0.20, -0.12, 0.80). The test information curve peaked below θ=0, indicating suitability for examinees with low to average ability.
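
The Rasch model behind these estimates specifies the probability of a correct response as a logistic function of the gap between examinee ability θ and item difficulty b, both in logits. A minimal sketch using only the aggregate figures reported above:

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Rasch model probability of a correct response for ability theta
    and item difficulty b (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Illustration with the reported aggregates: an examinee at the median
# student ability (1.09 logits) facing an item of mean difficulty
# (-0.61 logits) answers correctly with probability ~0.85.
p = rasch_p(1.09, -0.61)
```

This is only the response function; the study's actual parameter estimation was done with the R packages irtQ and TAM.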

Conclusion: Advanced generative AI platforms approximated the performance of high-performing students, but outcomes varied. The Rasch model effectively evaluated AI competency, supporting its potential utility for future AI performance assessments in nursing education.

Citations: 0
Performance of large language models in medical licensing examinations: a systematic review and meta-analysis.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-11-18 DOI: 10.3352/jeehp.2025.22.36
Haniyeh Nouri, Abdollah Mahdavi, Ali Abedi, Alireza Mohammadnia, Mahnaz Hamedan, Masoud Amanzadeh

Purpose: This study systematically evaluates and compares the performance of large language models (LLMs) in answering medical licensing examination questions. By conducting subgroup analyses based on language, question format, and model type, this meta-analysis aims to provide a comprehensive overview of LLM capabilities in medical education and clinical decision-making.

Methods: This systematic review, registered in PROSPERO and following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, searched MEDLINE (PubMed), Scopus, and Web of Science for relevant articles published up to February 1, 2025. The search strategy included Medical Subject Headings (MeSH) terms and keywords related to ("ChatGPT" OR "GPT" OR "LLM variants") AND ("medical licensing exam*" OR "medical exam*" OR "medical education" OR "radiology exam*"). Eligible studies evaluated LLM accuracy on medical licensing examination questions. Pooled accuracy was estimated using a random-effects model, with subgroup analyses by LLM type, language, and question format. Publication bias was assessed using Egger's regression test.

Results: This systematic review identified 2,404 studies. After removing duplicates and excluding irrelevant articles through title and abstract screening, 36 studies were included after full-text review. The pooled accuracy was 72% (95% confidence interval, 70.0% to 75.0%) with high heterogeneity (I2=99%, P<0.001). Among LLMs, GPT-4 achieved the highest accuracy (81%), followed by Bing (79%), Claude (74%), Gemini/Bard (70%), and GPT-3.5 (60%) (P=0.001). Performance differences across languages (range, 62% in Polish to 77% in German) were not statistically significant (P=0.170).
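
The pooled-accuracy figure comes from a random-effects model. A common estimator for the between-study variance in such models is DerSimonian-Laird, sketched below; the per-study effects and variances passed in are hypothetical placeholders, not the review's actual inputs.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect estimates under a random-effects model,
    using the DerSimonian-Laird estimate of between-study variance tau^2.
    Returns (pooled effect, standard error, tau^2)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic around the fixed-effect mean.
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight with the between-study variance added to each study.
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical example: three studies with differing accuracies.
pooled, se, tau2 = dersimonian_laird([0.60, 0.72, 0.81],
                                     [0.004, 0.002, 0.003])
```

With high heterogeneity, as reported here (I2=99%), tau^2 dominates the weights and the pooled estimate moves toward an unweighted average of the studies.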

Conclusion: LLMs, particularly GPT-4, can match or exceed medical students' examination performance and may serve as supportive educational tools. However, due to variability and the risk of errors, they should be used cautiously as complements rather than replacements for traditional learning methods.

Citations: 0
Validity of the formative physical therapy Student and Clinical Instructor Performance Instrument in the United States: a quasi-experimental, time-series study.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-09-26 DOI: 10.3352/jeehp.2025.22.26
Sean Gallivan, Jamie Bayliss

Purpose: The aim of this study was to assess the validity of the Student and Clinical Instructor Performance Instrument (SCIPAI), a novel formative tool used in physical therapist education to assess student and clinical instructor (CI) performance throughout clinical education experiences (CEEs). The researchers hypothesized that the SCIPAI would demonstrate concurrent, predictive, and construct validity while offering additional contemporary validity evidence.

Methods: In this quasi-experimental, time-series study, 811 student-CI pairs completed 2 SCIPAIs before and after the CEE midpoint, plus an endpoint Clinical Performance Instrument (CPI), across beginning-to-terminal CEEs over a 1-year period. Spearman rank correlation analyses used final SCIPAI and CPI like-item scores to assess concurrent validity, and earlier SCIPAI and final CPI like-item scores to assess predictive validity. Construct validity was assessed via the progression of student and CI performance scores within CEEs using Wilcoxon signed-rank testing. No randomization or grouping of subjects occurred.
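
The two non-parametric analyses named above can be sketched as follows; the paired ordinal ratings are hypothetical stand-ins, not the study's data.

```python
# Sketch of the validity analyses: Spearman rank correlation between
# final SCIPAI and CPI like-item scores, and a Wilcoxon signed-rank test
# on within-CEE score progression. All ratings below are hypothetical.
from scipy.stats import spearmanr, wilcoxon

scipai_final = [3, 4, 4, 5, 2, 5, 3, 4]   # hypothetical final SCIPAI ratings
cpi_final    = [3, 4, 5, 5, 2, 4, 3, 4]   # hypothetical final CPI ratings

# Concurrent validity: rank correlation between the two instruments.
rho, p_rho = spearmanr(scipai_final, cpi_final)

# Construct validity: paired, non-parametric test of progression from an
# early SCIPAI to the final one within the same students.
scipai_1 = [2, 3, 2, 3, 1, 3, 2, 3]       # hypothetical early ratings
p_prog = wilcoxon(scipai_1, scipai_final).pvalue
```

Both tests suit the ordinal, paired structure of the instrument scores, which is presumably why the authors chose them over parametric alternatives.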

Results: Moderate correlation existed between like final SCIPAI and CPI items (P<0.005) and between some like items of earlier SCIPAIs and final CPIs (P<0.005). Student performance scores demonstrated progress from SCIPAIs 1 to 4 within CEEs (P<0.005). While a greater number of CIs demonstrated progression rather than regression in performance from SCIPAI 1 to SCIPAI 4, the greater magnitude of decreases in CI performance contributed to an aggregate ratings decrease of CI performance (P<0.005).

Conclusion: The SCIPAI demonstrates concurrent, predictive, and construct validity when used by students and CIs to rate student performance at regular points throughout clinical education experiences.

Citations: 0
Longitudinal relationships between Korean medical students' academic performance in medical knowledge and clinical performance examinations: a retrospective longitudinal study.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-06-10 DOI: 10.3352/jeehp.2025.22.18
Yulim Kang, Hae Won Kim

Purpose: This study investigated the longitudinal relationships between performance on 3 examinations assessing medical knowledge and clinical skills among Korean medical students in the clinical phase. This study addressed the stability of each examination score and the interrelationships among examinations over time.

Methods: A retrospective longitudinal study was conducted at Yonsei University College of Medicine in Korea with a cohort of 112 medical students over 2 years. The students were in their third year in 2022 and progressed to the fourth year in 2023. We obtained comprehensive clinical science examination (CCSE) and progress test (PT) scores 3 times (T1-T3), and clinical performance examination (CPX) scores twice (T1 and T2). Autoregressive cross-lagged models were fitted to analyze their relationships.

Results: For each of the 3 examinations, the score at 1 time point predicted the subsequent score. Regarding cross-lagged effects, the CCSE at T1 predicted PT at T2 (β=0.472, P<0.001) and CCSE at T2 predicted PT at T3 (β=0.527, P<0.001). The CPX at T1 predicted the CCSE at T2 (β=0.163, P=0.006), and the CPX at T2 predicted the CCSE at T3 (β=0.154, P=0.006). The PT at T1 predicted the CPX at T2 (β=0.273, P=0.006).
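
One equation of an autoregressive cross-lagged model regresses a later score on its own earlier value (the autoregressive path) and on the other examination's earlier value (the cross-lagged path). The sketch below fits one such equation by least squares on synthetic data, not the study's; the study fitted the full model jointly, but the single-equation form illustrates the two path types.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 112                                  # cohort size reported above
# Synthetic scores with known paths: autoregressive 0.6, cross-lagged 0.2.
ccse_t1 = rng.normal(0.0, 1.0, n)
cpx_t1 = 0.3 * ccse_t1 + rng.normal(0.0, 1.0, n)
ccse_t2 = 0.6 * ccse_t1 + 0.2 * cpx_t1 + rng.normal(0.0, 0.5, n)

# Regress the T2 score on both T1 scores (intercept, CCSE_T1, CPX_T1).
X = np.column_stack([np.ones(n), ccse_t1, cpx_t1])
beta, *_ = np.linalg.lstsq(X, ccse_t2, rcond=None)
# beta[1] estimates the autoregressive path, beta[2] the cross-lagged path.
```

With the generating paths known, the recovered coefficients land near 0.6 and 0.2, which mirrors how the study's β values are read: each is a lagged score's contribution after controlling for the other predictor.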

Conclusion: The study identified each examination's stability and the complexity of the longitudinal relationships between them. These findings may help predict medical students' performance on subsequent examinations, potentially informing the provision of necessary student support.

Citations: 0
Empirical effect of the Dr LEE Jong-wook Fellowship Program to empower sustainable change for the health workforce in Tanzania: a mixed-methods study.
IF 9.3 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-01-20 DOI: 10.3352/jeehp.2025.22.6
Masoud Dauda, Swabaha Aidarus Yusuph, Harouni Yasini, Issa Mmbaga, Perpetua Mwambinngu, Hansol Park, Gyeongbae Seo, Kyoung Kyun Oh

Purpose: This study evaluated the Dr LEE Jong-wook Fellowship Program’s impact on Tanzania’s health workforce, focusing on relevance, effectiveness, efficiency, impact, and sustainability in addressing healthcare gaps.

Methods: A mixed-methods research design was employed. Data were collected from 97 out of 140 alumni through an online survey, 35 in-depth interviews, and one focus group discussion. The study was conducted from November to December 2023 and included alumni from 2009 to 2022. Measurement instruments included structured questionnaires for quantitative data and semi-structured guides for qualitative data. Quantitative analysis involved descriptive and inferential statistics (Spearman’s rank correlation, non-parametric tests) using Python ver. 3.11.0 and Stata ver. 14.0. Thematic analysis was employed to analyze qualitative data using NVivo ver. 12.0.

Results: Findings indicated high relevance (mean=91.6, standard deviation [SD]=8.6), effectiveness (mean=86.1, SD=11.2), efficiency (mean=82.7, SD=10.2), and impact (mean=87.7, SD=9.9), with improved skills, confidence, and institutional service quality. However, sustainability had a lower score (mean=58.0, SD=11.1), reflecting challenges in follow-up support and resource allocation. Effectiveness strongly correlated with impact (ρ=0.746, P<0.001). The qualitative findings revealed that participants valued tailored training but highlighted barriers, such as language challenges and insufficient practical components. Alumni-led initiatives contributed to knowledge sharing, but limited resources constrained sustainability.

Conclusion: The Fellowship Program enhanced Tanzania’s health workforce capacity, but it requires localized curricula and strengthened alumni networks for sustainability. These findings provide actionable insights for improving similar programs globally, confirming the hypothesis that tailored training positively influences workforce and institutional outcomes.

Empirical effect of the Dr LEE Jong-wook Fellowship Program to empower sustainable change for the health workforce in Tanzania: a mixed-methods study
IF 9.3 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 DOI : 10.3352/jeehp.2025.22.6
Masoud Dauda, Swabaha Aidarus Yusuph, Harouni Yasini, Issa Mmbaga, Perpetua Mwambinngu, Hansol Park, Gyeongbae Seo, Kyoung Kyun Oh

Purpose: This study evaluated the Dr LEE Jong-wook Fellowship Program's impact on Tanzania's health workforce, focusing on relevance, effectiveness, efficiency, impact, and sustainability in addressing healthcare gaps.

Methods: A mixed-methods research design was employed. Data were collected from 97 of 140 alumni through an online survey, 35 in-depth interviews, and one focus group discussion. The study was conducted from November to December 2023 and included alumni from 2009 to 2022. Measurement instruments included structured questionnaires for quantitative data and semi-structured guides for qualitative data. Quantitative analysis involved descriptive and inferential statistics (Spearman's rank correlation, non-parametric tests) using Python ver. 3.11.0 and Stata ver. 14.0. Thematic analysis of the qualitative data was performed using NVivo ver. 12.0.

Results: Findings indicated high relevance (mean=91.6, standard deviation [SD]=8.6), effectiveness (mean=86.1, SD=11.2), efficiency (mean=82.7, SD=10.2), and impact (mean=87.7, SD=9.9), with improved skills, confidence, and institutional service quality. However, sustainability scored lower (mean=58.0, SD=11.1), reflecting challenges in follow-up support and resource allocation. Effectiveness strongly correlated with impact (ρ=0.746, P<0.001). The qualitative findings revealed that participants valued tailored training but highlighted barriers such as language challenges and insufficient practical components. Alumni-led initiatives contributed to knowledge sharing, but limited resources constrained sustainability.

Conclusion: The Fellowship Program enhanced Tanzania's health workforce capacity, but it requires localized curricula and strengthened alumni networks for sustainability. These findings provide actionable insights for improving similar programs globally, confirming the hypothesis that tailored training positively influences workforce and institutional outcomes.
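The quantitative arm above leans on Spearman's rank correlation (the reported ρ=0.746 between effectiveness and impact). As a minimal illustration of how the statistic is computed — stdlib Python only, with made-up data rather than the study's:

```python
import math

def average_ranks(xs):
    """Ranks of xs (1-based), with tied values sharing their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Perfectly monotone pairs: rho is 1 (up to floating point).
print(spearman_rho([3, 1, 4, 2], [30, 10, 40, 20]))
```

Because it operates on ranks, the statistic captures any monotone association, which is why it pairs naturally with the non-parametric tests the study also used.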
Citations: 0
Empathy and tolerance of ambiguity in medical students and doctors participating in art-based observational training at the Rijksmuseum in Amsterdam, the Netherlands: a before-and-after study
IF 9.3 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-01-14 DOI: 10.3352/jeehp.2025.22.3
Stella Anna Bult, Thomas van Gulik

Purpose: This research presents an experimental study using validated questionnaires to quantitatively assess the outcomes of art-based observational training in medical students, residents, and specialists. The study tested the hypothesis that art-based observational training would lead to measurable effects on judgement skills (tolerance of ambiguity) and empathy in medical students and doctors.

Methods: An experimental cohort study with pre- and post-intervention assessments was conducted using validated questionnaires and qualitative evaluation forms to examine the outcomes of art-based observational training in medical students and doctors. Between December 2023 and June 2024, 15 art courses were conducted in the Rijksmuseum in Amsterdam. Participants were assessed on empathy using the Jefferson Scale of Empathy (JSE) and tolerance of ambiguity using the Tolerance of Ambiguity in Medical Students and Doctors (TAMSAD) scale.

Results: In total, 91 participants were included; 29 participants completed the JSE and 62 completed the TAMSAD scales. The results showed statistically significant post-test increases for mean JSE and TAMSAD scores (3.71 points for the JSE, ranging from 20 to 140, and 1.86 points for the TAMSAD, ranging from 0 to 100). The qualitative findings were predominantly positive.

Conclusion: The results suggest that incorporating art-based observational training in medical education improves empathy and tolerance of ambiguity. This study highlights the importance of art-based observational training in medical education in the professional development of medical students and doctors.

Citations: 0
Comparison between GPT-4 and human raters in grading pharmacy students' exam responses in Malaysia: a cross-sectional study.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-07-28 DOI: 10.3352/jeehp.2025.22.20
Wuan Shuen Yap, Pui San Saw, Li Ling Yeap, Shaun Wen Huey Lee, Wei Jin Wong, Ronald Fook Seng Lee

Purpose: Manual grading is time-consuming and prone to inconsistencies, prompting the exploration of generative artificial intelligence tools such as GPT-4 to enhance efficiency and reliability. This study investigated GPT-4's potential in grading pharmacy students' exam responses, focusing on the impact of optimized prompts. Specifically, it evaluated the alignment between GPT-4 and human raters, assessed GPT-4's consistency over time, and determined its error rates in grading pharmacy students' exam responses.

Methods: We conducted a comparative study using past exam responses graded by university-trained raters and by GPT-4. Responses were randomized before evaluation by GPT-4, accessed via a Plus account between April and September 2024. Prompt optimization was performed on 16 responses, followed by evaluation of 3 prompt delivery methods. We then applied the optimized approach across 4 item types. Intraclass correlation coefficients and error analyses were used to assess consistency and agreement between GPT-4 and human ratings.

Results: GPT-4's ratings aligned reasonably well with human raters, demonstrating moderate to excellent reliability (intraclass correlation coefficient=0.617-0.933), depending on item type and the optimized prompt. When stratified by grade bands, GPT-4 was less consistent in marking high-scoring responses (Z=-5.71-4.62, P<0.001). Overall, despite achieving substantial alignment with human raters in many cases, discrepancies across item types and a tendency to commit basic errors necessitate continued educator involvement to ensure grading accuracy.

Conclusion: With optimized prompts, GPT-4 shows promise as a supportive tool for grading pharmacy students' exam responses, particularly for objective tasks. However, its limitations, including errors and variability in grading high-scoring responses, require ongoing human oversight. Future research should explore advanced generative artificial intelligence models and broader assessment formats to further enhance grading reliability.
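The rater agreement above is summarized with intraclass correlation coefficients. As a rough stdlib-Python sketch of one common variant, ICC(3,1) (two-way, consistency, single rater — the abstract does not specify which form the study used, and the ratings below are hypothetical):

```python
def icc_3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single rater.
    `ratings` is a list of rows, one row per graded script, one column
    per rater. Illustrative only; real analyses typically use a
    dedicated statistics package."""
    n, k = len(ratings), len(ratings[0])  # scripts x raters
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical scores from 2 raters on 4 scripts: rater 2 sits a constant
# point higher, which a consistency ICC ignores entirely.
print(icc_3_1([[8, 9], [4, 5], [6, 7], [2, 3]]))
```

A consistency ICC of 1.0 for the data above shows why the choice of ICC form matters: an absolute-agreement form would penalize the systematic one-point offset between raters.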

Citations: 0
Mixed reality versus manikins in basic life support simulation-based training for medical students in France: the mixed reality non-inferiority randomized controlled trial.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-05-12 DOI: 10.3352/jeehp.2025.22.15
Sofia Barlocco De La Vega, Evelyne Guerif-Dubreucq, Jebrane Bouaoud, Myriam Awad, Léonard Mathon, Agathe Beauvais, Thomas Olivier, Pierre-Clément Thiébaud, Anne-Laure Philippon

Purpose: To compare the effectiveness of mixed reality with traditional manikin-based simulation in basic life support (BLS) training, testing the hypothesis that mixed reality is non-inferior to manikin-based simulation.

Methods: A non-inferiority randomized controlled trial was conducted. Third-year medical students were randomized into 2 groups. The mixed reality group received 32 minutes of individual training using a virtual reality headset and a torso for chest compressions (CC). The manikin group participated in 2 hours of group training consisting of theoretical and practical sessions using a low-fidelity manikin. The primary outcome was the overall BLS performance score, assessed at 1 month through a standardized BLS scenario using a 10-item assessment scale. The quality of CC, student satisfaction, and confidence levels were secondary outcomes and assessed through superiority analyses.

Results: Data from 155 participants were analyzed, with 84 in the mixed reality group and 71 in the manikin group. The mean overall BLS performance score was 6.4 (mixed reality) vs. 6.5 (manikin), (mean difference, -0.1; 95% confidence interval [CI], -0.45 to +∞). CC depth was greater in the manikin group (50.3 mm vs. 46.6 mm; mean difference, -3.7 mm; 95% CI, -6.5 to -0.9), with 61.2% achieving optimal depth compared to 43.8% in the mixed reality group (mean difference, 17.4%; 95% CI, -29.3 to -5.5). Satisfaction was higher in the mixed reality group (4.9/5 vs. 4.7/5 in the manikin group; difference, 0.2; 95% CI, 0.07 to 0.33), as was confidence in performing BLS (3.9/5 vs. 3.6/5; difference, 0.3; 95% CI, 0.11 to 0.58). No other significant differences were observed for secondary outcomes.

Conclusion: Mixed reality is non-inferior to manikin simulation in terms of overall BLS performance score assessed at 1 month.
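A non-inferiority conclusion of this kind rests on comparing a one-sided confidence bound with a pre-specified margin. A minimal sketch of that decision rule in stdlib Python (the standard error and margin below are illustrative assumptions, not the trial's values):

```python
from statistics import NormalDist

def noninferiority(mean_diff, se, margin, alpha=0.025):
    """Return (lower_bound, non_inferior): the new modality is declared
    non-inferior if the one-sided lower confidence bound of
    (new - standard) exceeds -margin."""
    z = NormalDist().inv_cdf(1 - alpha)  # about 1.96 for alpha = 0.025
    lower = mean_diff - z * se
    return lower, lower > -margin

# Illustrative numbers: an observed difference of -0.1 points with an
# assumed SE of 0.18, tested against an assumed margin of 0.5 points.
lower, ok = noninferiority(-0.1, 0.18, 0.5)
print(round(lower, 2), ok)
```

The trial's reported interval (-0.45 to +∞) is exactly this one-sided construction: only the lower bound is computed, and non-inferiority holds when it clears the negated margin.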

Citations: 0
The impact of differential item functioning on ability estimation using the Korean Medical Licensing Examination with computerized adaptive testing: a post-hoc simulation study.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-10-10 DOI: 10.3352/jeehp.2025.22.31
Dogyeong Kim, Jeongwook Choi, Dong Gi Seo

Purpose: This study examined the impact of differential item functioning (DIF) on ability estimation in a computerized adaptive testing (CAT) environment using real response data from the 2017 Korean Medical Licensing Examination (KMLE). We hypothesized that excluding gender-based DIF items would improve estimation accuracy, particularly for examinees at the extremes of the ability scale.

Methods: The study was conducted in 2 steps: (1) DIF detection and (2) post-hoc simulation. The analysis used data from 3,259 examinees who completed all 360 dichotomous items. Gender-based DIF was detected with the residual-based DIF method (reference group: males; focal group: females). Two CAT conditions (all items vs. DIF-excluded) were compared against a "true θ" estimated from a fixed-form test of 264 non-DIF items. Accuracy was evaluated using bias, root mean square error (RMSE), and correlation with true θ.

Results: In the CAT condition excluding DIF items, accuracy improved, with RMSE reduced and correlation with true θ increased. However, bias was slightly larger in magnitude. Gender-specific analyses showed that DIF removal reduced the underestimation of female ability but increased the underestimation of male ability, yielding estimates that were fairer across genders. When DIF items were included, estimation errors were more pronounced at both low and high ability levels.

Conclusion: Managing DIF in CAT-based high-stakes examinations can enhance fairness and precision. Using real examinee data, this study provides practical evidence of the implications of DIF for CAT-based measurement and supports fairness-oriented test design.
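Bias, RMSE, and correlation with the true θ are the standard recovery metrics in simulations like this one. A minimal stdlib-Python sketch of the first two (the θ values are made up for illustration, not the study's data):

```python
import math

def bias(estimates, truths):
    """Mean signed error: positive values indicate overestimation."""
    return sum(e - t for e, t in zip(estimates, truths)) / len(estimates)

def rmse(estimates, truths):
    """Root mean square error between estimated and true abilities."""
    return math.sqrt(
        sum((e - t) ** 2 for e, t in zip(estimates, truths)) / len(estimates)
    )

theta_true = [-1.2, -0.4, 0.0, 0.7, 1.5]  # hypothetical true abilities
theta_hat = [-1.0, -0.5, 0.2, 0.6, 1.2]   # hypothetical CAT estimates
print(bias(theta_hat, theta_true), rmse(theta_hat, theta_true))
```

Because bias is signed while RMSE is not, the pattern the study reports (lower RMSE but slightly larger bias after removing DIF items) is possible: errors can shrink overall while becoming more one-directional.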

Citations: 0
Journal: Journal of Educational Evaluation for Health Professions