
Latest publications from the Journal of Educational Evaluation for Health Professions

Development and psychometric assessment of a scale for evaluating healthcare professionals' attitudes toward interprofessional education and collaboration in the United States: a cross-sectional study.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-10-20 DOI: 10.3352/jeehp.2025.22.32
Michael Christopher Banks, Ryan Brock Mutcheson, Maedot Ariaya Haymete, Serkan Toy

Purpose: Interprofessional education (IPE) is increasingly recognized as critical to preparing health professionals for collaborative practice, yet rigorous assessment remains limited by a lack of psychometrically sound instruments. Building on a previously developed questionnaire for physicians, this study aimed to expand the scale to include allied health professionals and to evaluate whether the factor structure remained consistent across professions. We hypothesized that a similar factor structure would emerge from the combined dataset, thereby supporting the scale's generalizability.

Methods: This observational study included 930 healthcare professionals in the United States (379 physicians, 419 nurses, 76 pharmacists, and others) who completed a 35-item questionnaire addressing IPE competency domains. Data were collected between December 2019 and May 2020. Exploratory factor analysis was employed to examine the factor structure, followed by item response theory (IRT) analyses to assess item fit, reliability, and validity. Raw data are available upon request.
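The study used its own data and tooling; purely as an illustration of the exploratory-factor-analysis step described above, the following Python sketch fits a 5-factor solution to a respondents-by-items matrix filled with random Likert-style data. The varimax rotation and the simulated data are assumptions, not details from the abstract.

```python
# Illustrative sketch only (not the authors' code): exploratory factor analysis
# of a respondents-by-items matrix. Random Likert-style data stand in for the
# study's responses, which are available only on request.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(930, 22)),        # 930 respondents x 22 items
                     columns=[f"item_{i+1}" for i in range(22)])

# The abstract reports a 5-factor solution; the varimax rotation is an assumption.
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
fa.fit(items.values)

# Loadings: rows = factors, columns = items; inspect which items load together.
loadings = pd.DataFrame(fa.components_, columns=items.columns)
print(loadings.round(2))
```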

Results: Factor analysis of 22 retained items confirmed a 5-factor solution: teamwork and communication, patient-centered care, roles and responsibilities, ethics and attitudes, and reflective practice, explaining 59% of the variance. Subscale reliabilities ranged from α=0.65 to 0.87. IRT analyses supported construct validity and measurement precision, while identifying areas for refinement in reflective practice.
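The subscale reliabilities reported above are Cronbach's alpha values, computed from each subscale's respondents-by-items score matrix. The sketch below shows the standard formula in Python, using demonstration data rather than the study's.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Demonstration with random data only:
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(100, 5)).astype(float)
print(round(cronbach_alpha(demo), 2))
```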

Conclusion: This study demonstrates that the scale is reliable, valid, and generalizable across diverse health professions. It provides a robust tool for assessing attitudes toward IPE, offering value for curriculum evaluation, institutional benchmarking, and future longitudinal research on professional identity formation and collaborative practice.

Citations: 0
Performance of ChatGPT-4 on the French Board of Plastic Reconstructive and Aesthetic Surgery written exam: a descriptive study.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-09-30 DOI: 10.3352/jeehp.2025.22.27
Emma Dejean-Bouyer, Anoujat Kanlagna, François Thuau, Pierre Perrot, Ugo Lancien

Purpose: This study aims to evaluate the performance of Chat Generative Pre-Trained Transformer 4 (ChatGPT-4) on the French Board of Plastic, Reconstructive, and Aesthetic Surgery written examination and to assess its role as a supplementary resource in helping medical students prepare for the qualification examination in plastic surgery.

Methods: This descriptive study evaluated ChatGPT-4's performance on 213 items from the October 2024 French Board of Plastic, Reconstructive, and Aesthetic Surgery written examination. Responses were assessed for accuracy, logical reasoning, internal and external information use, and were categorized for fallacies by independent reviewers. Statistical analyses included chi-square tests and Fisher's exact test for significance.
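As a hedged illustration of the significance tests named in the methods, the following Python sketch runs a chi-square test and Fisher's exact test on a 2x2 table of correct/incorrect counts. The counts and the split into text-only versus image-based items are invented for demonstration and are not study data.

```python
# Illustrative sketch: comparing accuracy across two question categories with a
# chi-square test, with Fisher's exact test as the small-sample alternative.
from scipy.stats import chi2_contingency, fisher_exact

#           correct  incorrect
table = [[80, 20],   # e.g., text-only items (placeholder counts)
         [15, 10]]   # e.g., image-based items (placeholder counts)

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-square P={p_chi2:.3f}, Fisher exact P={p_fisher:.3f}")
```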

Results: ChatGPT-4 answered all questions across the 10 modules, achieving an overall accuracy rate of 77.5%. The model applied logical reasoning in 98.1% of the questions, utilized internal information in 94.4%, and incorporated external information in 91.1%.

Conclusion: ChatGPT-4 performs satisfactorily on the French Board of Plastic, Reconstructive, and Aesthetic Surgery written examination. Its accuracy met the minimum passing standards for the exam. While responses generally align with expected knowledge, careful verification remains necessary, particularly for questions involving image interpretation. As artificial intelligence continues to evolve, ChatGPT-4 is expected to become an increasingly reliable tool for medical education. At present, it remains a valuable resource for assisting plastic surgery residents in their training.

Citations: 0
Halted medical education and medical residents’ training in Korea, journal metrics, and appreciation to reviewers and volunteers
IF 9.3 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-01-13 DOI: 10.3352/jeehp.2025.22.1
Sun Huh
{"title":"Halted medical education and medical residents’ training in Korea, journal metrics, and appreciation to reviewers and volunteers","authors":"Sun Huh","doi":"10.3352/jeehp.2025.22.1","DOIUrl":"10.3352/jeehp.2025.22.1","url":null,"abstract":"","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"1"},"PeriodicalIF":9.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11880820/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142980326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Perceptions of faculty and medical students regarding an undergraduate research culture activity in Myanmar: a qualitative study.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-10-27 DOI: 10.3352/jeehp.2025.22.33
Htain Lin Aung, Moe Oo Thant, July Maung Maung, Ye Hlaing Oo, Thin Thin Toe, Hla Moe

Purpose: This study explored the perceptions of faculty members and third-year medical students regarding the research culture activity (RCA), a program designed to engage undergraduates in research at the University of Medicine, Mandalay, Myanmar. It aimed to identify the knowledge, attitudes, and skills (KAS) gained, the challenges encountered, and suggestions for improvement.

Methods: This qualitative study employed 4 semi-structured focus group discussions with 17 third-year medical students and 16 faculty members who participated in the 2020 RCA. Student responses related to KAS were analyzed using a deductive framework approach, while challenges and suggestions were examined through inductive thematic analysis. Discussions were audio-recorded, transcribed verbatim in Burmese, translated into English, and collaboratively coded using Atlas.ti version 9.0.5.

Results: Participants reported improved understanding of scientific literature, greater responsibility, strengthened teamwork, and enhanced practical research skills. Reported challenges included limited research preparedness, scheduling conflicts, inconsistent supervision, financial constraints, and weak coordination with inpatient clinicians. Participants also suggested clearer guidelines, pre-research training, protected time, stronger supervision, and institutional budgetary support.

Conclusion: The RCA provides substantial educational value in developing research competencies and remains a promising, potentially adaptable model for resource-limited settings. Its sustainability will depend on institutional commitment, supervisory capacity, and modest financial investment. Future research should prospectively assess KAS outcomes, compare supervision models and group sizes, evaluate digital workflows for efficiency, and conduct long-term follow-up of graduates' scholarly activities to build evidence for scalable implementation.

Citations: 0
Correlation between a motion analysis method and Global Operative Assessment of Laparoscopic Skills for assessing interns' performance in a simulated peg transfer task in Jordan: a validation study
IF 9.3 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-03-06 DOI: 10.3352/jeehp.2025.22.10
Esraa Saleh Abdelall, Shadi Mohammad Hamouri, Abdallah Fawaz Al Dwairi, Omar Mefleh Al-Araidah

Purpose: This study aimed to validate the use of ProAnalyst (Xcitex Inc.), a professional motion analysis program, for assessing the performance of surgical interns performing the peg transfer task in a simulator box, as preparation for safe practice in real minimally invasive surgery.

Methods: A correlation study was conducted in a multidisciplinary skills simulation lab at the Faculty of Medicine, Jordan University of Science and Technology from October 2019 to February 2020. Forty-one interns (i.e., novices and intermediates) were recruited, and an expert surgeon participated as a reference benchmark. Videos of participants’ performance were analyzed using ProAnalyst and the Global Operative Assessment of Laparoscopic Skills (GOALS). The two sets of results were analyzed to identify correlations.

Results: The motion analysis scores from ProAnalyst were correlated with those from GOALS for efficiency (r=+0.38, P<0.05), autonomy (r=+0.63, P<0.01), depth perception (r=+0.43, P<0.05), dexterity (r=+0.71, P<0.001), and operation flow (r=+0.88, P<0.001). Both assessment methods differentiated the participants’ performance based on their experience level.
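The coefficients above are ordinary Pearson correlations between the two scoring methods. A minimal Python sketch, with placeholder values standing in for the paired ProAnalyst and GOALS scores:

```python
# Illustrative sketch: Pearson correlation between motion-analysis scores and
# GOALS scores for the same participants. The arrays are placeholders only.
from scipy.stats import pearsonr

motion_scores = [0.62, 0.71, 0.55, 0.80, 0.47, 0.66]   # hypothetical values
goals_scores  = [3.1, 3.6, 2.9, 4.2, 2.5, 3.4]         # hypothetical values

r, p_value = pearsonr(motion_scores, goals_scores)
print(f"r={r:+.2f}, P={p_value:.3f}")
```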

Conclusion: The motion analysis scoring method using ProAnalyst provides an objective, time-efficient, and reproducible assessment of interns’ performance, with results comparable to those obtained using GOALS. It may require initial training and set-up; however, it eliminates the need for expert surgeon judgment.

Citations: 0
Simulation-based teaching versus traditional small group teaching for first-year medical students among high and low scorers in respiratory physiology, India: a randomized controlled trial.
IF 9.3 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-02-21 DOI: 10.3352/jeehp.2025.22.8
Nalini Yelahanka Channegowda, Dinker Ramanand Pai, Shivasakthy Manivasakan

Purpose: Although simulation-based education (SBE) is widely used for skill training in clinical subjects, its use for teaching basic science concepts to phase I (pre-clinical) medical students remains limited. SBE is particularly well suited to cardiovascular and respiratory physiology because both normal physiology and its alterations are easy to recreate in a simulated environment, thus promoting a deep understanding of the core concepts.

Methods: A block-randomized study was conducted among 107 phase 1 (first-year) medical undergraduate students at a Deemed to be University in India. Group A received SBE and Group B received traditional small-group teaching. The effectiveness of the teaching intervention was assessed using pre- and post-tests. Student feedback was obtained through a self-administered structured questionnaire delivered as an anonymous online survey and through in-depth interviews.

Results: The intervention group showed a statistically significant improvement in post-test scores compared to the control group. A sub-analysis revealed that high scorers performed better than low scorers in both groups, but the knowledge gain among low scorers was more significant in the intervention group.
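The abstract does not name the statistical test behind this comparison; as one plausible, purely illustrative approach, the sketch below compares post-minus-pre score gains between the intervention and control groups with an independent-samples t-test. All values are placeholders.

```python
# Illustrative sketch only: the abstract does not specify the test, so this
# simply shows one common way to compare score gains (post minus pre) between
# an intervention group and a control group. All numbers are placeholders.
import numpy as np
from scipy.stats import ttest_ind

gain_intervention = np.array([4, 6, 5, 7, 3, 6, 5])   # hypothetical gains
gain_control      = np.array([2, 3, 1, 4, 2, 3, 2])   # hypothetical gains

t_stat, p_value = ttest_ind(gain_intervention, gain_control)
print(f"t={t_stat:.2f}, P={p_value:.3f}")
```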

Conclusion: This teaching strategy offers a valuable supplement to traditional methods, fostering a deeper comprehension of clinical concepts from the outset of medical training.

Citations: 0
Presidential address 2025: expansion of computer-based testing from 12 to 27 health professions by 2027 and adoption of a large language model for item generation
IF 9.3 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-01-20 DOI: 10.3352/jeehp.2025.22.7
Hyunjoo Pai
{"title":"Presidential address 2025: expansion of computer-based testing from 12 to 27 health professions by 2027 and adoption of a large language model for item generation","authors":"Hyunjoo Pai","doi":"10.3352/jeehp.2025.22.7","DOIUrl":"10.3352/jeehp.2025.22.7","url":null,"abstract":"","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"22 ","pages":"7"},"PeriodicalIF":9.3,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11934035/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance of GPT-4o and o1-Pro on United Kingdom Medical Licensing Assessment-style items: a comparative study.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-10-10 DOI: 10.3352/jeehp.2025.22.30
Behrad Vakili, Aadam Ahmad, Mahsa Zolfaghari

Purpose: There is growing interest in large language models (LLMs) such as ChatGPT and in their potential to support autonomous learning for licensing examinations such as the UK Medical Licensing Assessment (UKMLA). However, empirical evaluations of artificial intelligence (AI) performance against the UKMLA standard remain limited.

Methods: We evaluated the performance of 2 recent ChatGPT versions, GPT-4o and o1-Pro, on a curated set of 374 UKMLA-style single-best-answer items spanning diverse medical specialties. Statistical comparisons using McNemar's test assessed the significance of differences between the 2 models. Specialties were analyzed to identify domain-specific variation. In addition, 20 image-based items were evaluated.
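McNemar's test, as named in the methods, compares two models' paired correct/incorrect outcomes on the same items. A minimal Python sketch with invented counts (not the study's 2x2 table):

```python
# Illustrative sketch: McNemar's test on paired correct/incorrect outcomes for
# two models answering the same items. The 2x2 counts are placeholders only.
from statsmodels.stats.contingency_tables import mcnemar

#                  model B correct   model B incorrect
table = [[80,  5],   # model A correct
         [12,  3]]   # model A incorrect

result = mcnemar(table, exact=True)   # exact binomial test on the discordant pairs
print(f"statistic={result.statistic}, P={result.pvalue:.4f}")
```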

Results: GPT-4o achieved an accuracy of 88.8%, while o1-Pro achieved 93.0%. McNemar's test revealed a statistically significant difference in favor of o1-Pro. Across specialties, both models demonstrated excellent performance in surgery, psychiatry, and infectious diseases. Notable differences arose in dermatology, respiratory medicine, and imaging, where o1-Pro consistently outperformed GPT-4o. Nevertheless, isolated weaknesses in general practice were observed. The analysis of image-based items showed 75% accuracy for GPT-4o and 90% for o1-Pro (P=0.25).

Conclusion: ChatGPT shows strong potential as an adjunct learning tool for UKMLA preparation, with both models achieving scores above the calculated pass mark. This underscores the promise of advanced AI models in medical education. However, specialty-specific inconsistencies suggest AI tools should complement, rather than replace, traditional study methods.

Citations: 0
Performance of large language models in medical licensing examinations: a systematic review and meta-analysis.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-11-18 DOI: 10.3352/jeehp.2025.22.36
Haniyeh Nouri, Abdollah Mahdavi, Ali Abedi, Alireza Mohammadnia, Mahnaz Hamedan, Masoud Amanzadeh

Purpose: This study systematically evaluates and compares the performance of large language models (LLMs) in answering medical licensing examination questions. By conducting subgroup analyses based on language, question format, and model type, this meta-analysis aims to provide a comprehensive overview of LLM capabilities in medical education and clinical decision-making.

Methods: This systematic review, registered in PROSPERO and following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, searched MEDLINE (PubMed), Scopus, and Web of Science for relevant articles published up to February 1, 2025. The search strategy included Medical Subject Headings (MeSH) terms and keywords related to ("ChatGPT" OR "GPT" OR "LLM variants") AND ("medical licensing exam*" OR "medical exam*" OR "medical education" OR "radiology exam*"). Eligible studies evaluated LLM accuracy on medical licensing examination questions. Pooled accuracy was estimated using a random-effects model, with subgroup analyses by LLM type, language, and question format. Publication bias was assessed using Egger's regression test.
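As a hedged illustration of the random-effects pooling described in the methods, the sketch below applies the classic DerSimonian-Laird estimator to a handful of invented per-study accuracies. The abstract does not state whether raw or transformed proportions were pooled, so this is illustrative only.

```python
# Illustrative sketch of DerSimonian-Laird random-effects pooling. The
# accuracies and sample sizes are placeholders, not the studies in the review.
# In practice, proportions are often logit- or arcsine-transformed before pooling.
import numpy as np

acc = np.array([0.75, 0.68, 0.81, 0.60, 0.72])   # per-study accuracy (proportion)
n   = np.array([200, 150, 300, 120, 250])        # per-study number of items

var = acc * (1 - acc) / n                        # within-study variance of a proportion
w_fixed = 1 / var
pooled_fixed = np.sum(w_fixed * acc) / np.sum(w_fixed)

# Between-study heterogeneity (tau^2) via the DerSimonian-Laird estimator
q = np.sum(w_fixed * (acc - pooled_fixed) ** 2)
df = len(acc) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

w_random = 1 / (var + tau2)
pooled_random = np.sum(w_random * acc) / np.sum(w_random)
i2 = max(0.0, (q - df) / q) * 100                # I^2 heterogeneity statistic

print(f"pooled accuracy={pooled_random:.3f}, tau^2={tau2:.5f}, I^2={i2:.1f}%")
```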

Results: This systematic review identified 2,404 studies. After removing duplicates and excluding irrelevant articles through title and abstract screening, 36 studies were included after full-text review. The pooled accuracy was 72% (95% confidence interval, 70.0% to 75.0%) with high heterogeneity (I²=99%, P<0.001). Among LLMs, GPT-4 achieved the highest accuracy (81%), followed by Bing (79%), Claude (74%), Gemini/Bard (70%), and GPT-3.5 (60%) (P=0.001). Performance differences across languages (range, 62% in Polish to 77% in German) were not statistically significant (P=0.170).

Conclusion: LLMs, particularly GPT-4, can match or exceed medical students' examination performance and may serve as supportive educational tools. However, due to variability and the risk of errors, they should be used cautiously as complements rather than replacements for traditional learning methods.

Citations: 0
Comparing generative artificial intelligence platforms and nursing student performance on a women's health nursing examination in Korea: a Rasch model approach.
IF 3.7 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2025-01-01 Epub Date: 2025-09-05 DOI: 10.3352/jeehp.2025.22.23
Eun Jeong Ko, Tae Kyung Lee, Geum Hee Jeong

Purpose: This psychometric study aimed to compare the ability parameter estimates of generative artificial intelligence (AI) platforms with those of nursing students on a 50-item women's health nursing examination at Hallym University, Korea, using the Rasch model. It also sought to estimate item difficulty parameters and evaluate AI performance across varying difficulty levels.

Methods: The exam, consisting of 39 multiple-choice items and 11 true/false items, was administered to 111 fourth-year nursing students in June 2023. In December 2024, 6 generative AI platforms (GPT-4o, ChatGPT Free, Claude.ai, Clova X, Mistral.ai, Google Gemini) completed the same items. The responses were analyzed using the Rasch model to estimate the ability and difficulty parameters. Unidimensionality was verified by the Dimensionality Evaluation to Enumerate Contributing Traits (DETECT), and analyses were conducted using the R packages irtQ and TAM.

Results: The items satisfied unidimensionality (DETECT=-0.16). Item difficulty parameter estimates ranged from -3.87 to 1.96 logits (mean=-0.61), with a mean difficulty index of 0.79. Examinees' ability parameter estimates ranged from -0.71 to 3.14 logits (mean=1.17). GPT-4o, ChatGPT Free, and Claude.ai outperformed the median student ability (1.09 logits), scoring 2.68, 2.34, and 2.34, respectively, while Clova X, Mistral.ai, and Google Gemini exhibited lower scores (0.20, -0.12, 0.80). The test information curve peaked below θ=0, indicating suitability for examinees with low to average ability.
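For readers unfamiliar with the logit scale used above, the Rasch model expresses the probability of a correct response as a logistic function of the difference between person ability (θ) and item difficulty (b). The following Python sketch shows the formula and a crude difficulty starting value derived from a proportion-correct difficulty index; it is not the irtQ/TAM analysis used in the study.

```python
# Minimal Rasch-model sketch: P(correct) given person ability theta and item
# difficulty b, plus a crude difficulty starting value from the classical
# difficulty index (proportion correct). Values are placeholders.
import numpy as np

def rasch_probability(theta: float, b: float) -> float:
    """P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# A person 1 logit above an item's difficulty answers correctly ~73% of the time.
print(round(rasch_probability(theta=1.17, b=0.17), 2))

# Crude starting value for item difficulty from the proportion answering correctly:
# b ~ -logit(p). With p = 0.79 (the mean difficulty index above), b ~ -1.32.
p_correct = 0.79
b_start = -np.log(p_correct / (1 - p_correct))
print(round(b_start, 2))
```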

Conclusion: Advanced generative AI platforms approximated the performance of high-performing students, but outcomes varied. The Rasch model effectively evaluated AI competency, supporting its potential utility for future AI performance assessments in nursing education.

Citations: 0