
Latest Publications from the Journal of Educational Evaluation for Health Professions

Challenges and potential improvements in the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019) derived through meta-evaluation: a cross-sectional study
IF 4.4 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-01-01 Epub Date: 2024-04-02 DOI: 10.3352/jeehp.2024.21.8
Yoonjung Lee, Min-jung Lee, Junmoo Ahn, Chungwon Ha, Ye Ji Kang, Cheol Woong Jung, Dong-Mi Yoo, Jihye Yu, Seung-Hee Lee

Purpose: This study aimed to identify challenges and potential improvements in Korea’s medical education accreditation process according to the Accreditation Standards of the Korean Institute of Medical Education and Evaluation 2019 (ASK2019). Meta-evaluation was conducted to survey the experiences and perceptions of stakeholders, including self-assessment committee members, site visit committee members, administrative staff, and medical school professors.

Methods: A cross-sectional study was conducted using surveys sent to 40 medical schools. The 332 participants included self-assessment committee members, site visit team members, administrative staff, and medical school professors. The t-test, one-way analysis of variance, and the chi-square test were used to analyze and compare opinions on medical education accreditation among the categories of participants.
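As a rough illustration of the three analyses named above, the sketch below runs a two-group t-test, a one-way ANOVA across the four stakeholder categories, and a chi-square test with SciPy. All scores, the group-size split, and the count table are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the study's three analyses; data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
self_assess = rng.normal(4.0, 0.6, 80)   # placeholder Likert-style scores
site_visit = rng.normal(4.3, 0.6, 70)
admin_staff = rng.normal(4.1, 0.6, 60)
professors = rng.normal(3.9, 0.6, 122)   # invented split summing to 332

t, p_t = stats.ttest_ind(site_visit, professors)                 # two groups
f, p_f = stats.f_oneway(self_assess, site_visit, admin_staff, professors)
agree_table = [[30, 50], [45, 25]]       # hypothetical agree/disagree counts
chi2, p_c, dof, _ = stats.chi2_contingency(agree_table)
print(f"t-test P={p_t:.3f}, ANOVA P={p_f:.3f}, chi-square P={p_c:.3f}")
```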

Results: Site visit committee members placed greater importance on the necessity of accreditation than faculty members. Self-assessment committee members and professors shared a positive view of accreditation’s role in improving educational quality. Administrative staff rated the Korean Institute of Medical Education and Evaluation’s reliability and objectivity highly, unlike the self-assessment committee members. Site visit committee members perceived the clarity of accreditation standards positively, differing from self-assessment committee members. Administrative staff were most optimistic about implementing the standards. However, the accreditation process encountered challenges, especially content duplication and the preparation of self-evaluation reports. Finally, perceptions regarding the accuracy of final site visit reports varied significantly between the self-assessment committee members and the site visit committee members.

Conclusion: This study revealed diverse views on medical education accreditation, highlighting the need for improved communication, expectation alignment, and stakeholder collaboration to refine the accreditation process and quality.

The effect of simulation-based training on problem-solving skills, critical thinking skills, and self-efficacy among nursing students in Vietnam: a before-and-after study.
IF 9.3 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-01-01 Epub Date: 2024-09-23 DOI: 10.3352/jeehp.2024.21.24
Tran Thi Hoang Oanh, Luu Thi Thuy, Ngo Thi Thu Huyen

Purpose: This study investigated the effect of simulation-based training on nursing students’ problem-solving skills, critical thinking skills, and self-efficacy.

Methods: A single-group pretest and posttest study was conducted among 173 second-year nursing students at a public university in Vietnam from May 2021 to July 2022. Each student participated in the adult nursing preclinical practice course, which utilized a moderate-fidelity simulation teaching approach. Instruments including the Personal Problem-Solving Inventory Scale, Critical Thinking Skills Questionnaire, and General Self-Efficacy Questionnaire were employed to measure participants’ problem-solving skills, critical thinking skills, and self-efficacy. Data were analyzed using descriptive statistics and the paired-sample t-test with the significance level set at P<0.05.
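A minimal sketch of the pretest/posttest comparison described above, assuming hypothetical scores; with n=173, the paired-sample t-test has 172 degrees of freedom, matching the t(172) statistics reported in the results.

```python
# Paired-sample t-test on invented pretest/posttest scores (n=173).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pretest = rng.normal(131.4, 17.0, 173)            # placeholder PSI scores
posttest = pretest - rng.normal(4.2, 12.0, 173)   # lower PSI = better

t, p = stats.ttest_rel(pretest, posttest)         # df = n - 1 = 172
print(f"t(172)={t:.2f}, P={p:.3f}")
```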

Results: The mean score on the Personal Problem-Solving Inventory posttest (127.24±12.11) was lower than the pretest score (131.42±16.95); because lower scores on this inventory indicate better perceived problem-solving, this suggests an improvement in the participants’ problem-solving skills (t(172)=2.55, P=0.011). There was no statistically significant difference in critical thinking skills between the pretest and posttest (P=0.854). Self-efficacy among nursing students showed a substantial increase from the pretest (27.91±5.26) to the posttest (28.71±3.81), with t(172)=-2.26 and P=0.025.

Conclusion: The results suggest that simulation-based training can improve problem-solving skills and increase self-efficacy among nursing students. Therefore, the integration of simulation-based training in nursing education is recommended.

Performance of GPT-3.5 and GPT-4 on standardized urology knowledge assessment items in the United States: a descriptive study.
IF 9.3 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-01-01 Epub Date: 2024-07-08 DOI: 10.3352/jeehp.2024.21.17
Max Samuel Yudovich, Elizaveta Makarova, Christian Michael Hague, Jay Dilip Raman

Purpose: This study aimed to evaluate the performance of Chat Generative Pre-Trained Transformer (ChatGPT) with respect to standardized urology multiple-choice items in the United States.

Methods: In total, 700 multiple-choice urology board exam-style items were submitted to GPT-3.5 and GPT-4, and responses were recorded. Items were categorized based on topic and question complexity (recall, interpretation, and problem-solving). The accuracy of GPT-3.5 and GPT-4 was compared across item types in February 2024.
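The paper does not publish its prompting or scoring pipeline; the sketch below shows one plausible way to run such an evaluation, assuming the OpenAI Python client (openai>=1.0), an API key in the environment, and placeholder items rather than the study's 700 board-style questions.

```python
# Hypothetical grading loop for multiple-choice items; not the study's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

items = [  # placeholder items; the real set had 700 board-style questions
    {"stem": "Which ...?", "choices": "A) ... B) ... C) ... D) ...", "answer": "C"},
]

def accuracy(model_name: str) -> float:
    correct = 0
    for item in items:
        prompt = (f"{item['stem']}\n{item['choices']}\n"
                  "Answer with the single letter of the best choice.")
        resp = client.chat.completions.create(
            model=model_name,
            messages=[{"role": "user", "content": prompt}],
        )
        reply = resp.choices[0].message.content.strip().upper()
        correct += reply.startswith(item["answer"])
    return correct / len(items)

print(accuracy("gpt-3.5-turbo"), accuracy("gpt-4"))
```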

Results: GPT-4 answered 44.4% of items correctly compared to 30.9% for GPT-3.5 (P<0.0001). GPT-4 (vs. GPT-3.5) had higher accuracy with urologic oncology (43.8% vs. 33.9%, P=0.03), sexual medicine (44.3% vs. 27.8%, P=0.046), and pediatric urology (47.1% vs. 27.1%, P=0.012) items. Endourology (38.0% vs. 25.7%, P=0.15), reconstruction and trauma (29.0% vs. 21.0%, P=0.41), and neurourology (49.0% vs. 33.3%, P=0.11) items did not show significant differences in performance across versions. GPT-4 also outperformed GPT-3.5 with respect to recall (45.9% vs. 27.4%, P<0.00001), interpretation (45.6% vs. 31.5%, P=0.0005), and problem-solving (41.8% vs. 34.5%, P=0.56) type items. This difference was not significant for the higher-complexity items.
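The headline contrast can be checked by back-calculating counts from the reported percentages (roughly 311 and 216 correct answers out of 700) and running a chi-square test on the resulting 2×2 table; the counts below are rounded approximations, not the study's raw data, and the test returns P far below 0.0001.

```python
# Chi-square test on counts back-calculated from the reported accuracies.
from scipy.stats import chi2_contingency

table = [[311, 389],   # GPT-4: ~44.4% of 700 correct, rest incorrect
         [216, 484]]   # GPT-3.5: ~30.9% of 700 correct, rest incorrect
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, P={p:.1e}")
```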

Conclusion: ChatGPT performs relatively poorly on standardized multiple-choice urology board exam-style items, with GPT-4 outperforming GPT-3.5. The accuracy was below the proposed minimum passing standards for the American Board of Urology's Continuing Urologic Certification knowledge reinforcement activity (60%). As artificial intelligence progresses in complexity, ChatGPT may become more capable and accurate with respect to board examination items. For now, its responses should be scrutinized.

Discovering social learning ecosystems during clinical clerkship from United States medical students’ feedback encounters: a content analysis.
IF 4.4 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-01-01 Epub Date: 2024-02-28 DOI: 10.3352/jeehp.2024.21.5
Anna Therese Cianciolo, Heeyoung Han, Lydia Anne Howes, Debra Lee Klamen, Sophia Matos

Purpose: We examined United States medical students’ self-reported feedback encounters during clerkship training to better understand in situ feedback practices. Specifically, we asked: Who do students receive feedback from, about what, when, where, and how do they use it? We explored whether curricular expectations for preceptors’ written commentary aligned with feedback as it occurs naturalistically in the workplace.

Methods: This study occurred from July 2021 to February 2022 at Southern Illinois University School of Medicine. We used qualitative survey-based experience sampling to gather students’ accounts of their feedback encounters in 8 core specialties. We analyzed the who, what, when, where, and why of 267 feedback encounters reported by 11 clerkship students over 30 weeks. Code frequencies were mapped qualitatively to explore patterns in feedback encounters.

Results: Clerkship feedback occurs in patterns apparently related to the nature of clinical work in each specialty. These patterns may be attributable to each specialty’s “social learning ecosystem”—the distinctive learning environment shaped by the social and material aspects of a given specialty’s work, which determine who preceptors are, what students do with preceptors, and what skills or attributes matter enough to preceptors to comment on.

Conclusion: Comprehensive, standardized expectations for written feedback across specialties conflict with the reality of workplace-based learning. Preceptors may be better able—and more motivated—to document student performance that occurs as a natural part of everyday work. Nurturing social learning ecosystems could facilitate workplace-based learning such that, across specialties, students acquire a comprehensive clinical skillset appropriate for graduation.

Comparison of virtual and in-person simulations for sepsis and trauma resuscitation training in Singapore: a randomized controlled trial
IF 9.3 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-01-01 Epub Date: 2024-11-18 DOI: 10.3352/jeehp.2024.21.33
Matthew Jian Wen Low, Gene Wai Han Chan, Zisheng Li, Yiwen Koh, Chi Loong Jen, Zi Yao Lee, Lenard Tai Win Cheng

Purpose: This study aimed to compare cognitive, non-cognitive, and overall learning outcomes for sepsis and trauma resuscitation skills in novices with virtual patient simulation (VPS) versus in-person simulation (IPS).

Methods: A randomized controlled trial was conducted on junior doctors in 1 emergency department from January to December 2022, comparing 70 minutes of VPS (n=19) versus IPS (n=21) in sepsis and trauma resuscitation. Using the nominal group technique, we created skills assessment checklists and determined Bloom’s taxonomy domains for each checklist item. Two blinded raters observed participants leading 1 sepsis and 1 trauma resuscitation simulation. Satisfaction was measured using the Student Satisfaction with Learning Scale (SSLS). The SSLS and checklist scores were analyzed using the Wilcoxon rank sum test and the 2-tailed t-test, respectively.
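A minimal sketch of the two analyses named above, assuming invented scores: a Wilcoxon rank-sum (Mann-Whitney U) test for the ordinal SSLS totals and a 2-tailed t-test, with a pooled-SD Cohen's d, for the checklist scores.

```python
# Invented VPS (n=19) and IPS (n=21) scores; not the trial's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
ssls_vps = rng.integers(15, 26, 19)        # placeholder SSLS totals
ssls_ips = rng.integers(18, 29, 21)
check_vps = rng.normal(28.0, 5.0, 19)      # placeholder checklist scores
check_ips = rng.normal(26.0, 5.0, 21)

u, p_u = stats.mannwhitneyu(ssls_vps, ssls_ips, alternative="two-sided")
t, p_t = stats.ttest_ind(check_vps, check_ips)

n1, n2 = len(check_vps), len(check_ips)    # Cohen's d with pooled SD
pooled = np.sqrt(((n1 - 1) * check_vps.var(ddof=1)
                  + (n2 - 1) * check_ips.var(ddof=1)) / (n1 + n2 - 2))
d = (check_vps.mean() - check_ips.mean()) / pooled
print(f"Wilcoxon P={p_u:.2f}, t-test P={p_t:.2f}, d={d:.2f}")
```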

Results: For sepsis, there was no significant difference between VPS and IPS in overall scores (2.0; 95% confidence interval [CI], -1.4 to 5.4; Cohen’s d=0.38), nor in items that were cognitive (1.1; 95% CI, -1.5 to 3.7) or not only cognitive (0.9; 95% CI, -0.4 to 2.2). Likewise, for trauma, there was no significant difference in overall scores (-0.9; 95% CI, -4.1 to 2.3; Cohen’s d=0.19), nor in items that were cognitive (-0.3; 95% CI, -2.8 to 2.1) or not only cognitive (-0.6; 95% CI, -2.4 to 1.3). The median SSLS scores were lower with VPS than with IPS (-3.0; 95% CI, -5.0 to -1.0).

Conclusion: For novices, there were no major differences in overall and non-cognitive learning outcomes for sepsis and trauma resuscitation between VPS and IPS. Learners were more satisfied with IPS than with VPS (clinicaltrials.gov identifier: NCT05201950).

Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review
IF 9.3 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-01-01 Epub Date: 2024-03-15 DOI: 10.3352/jeehp.2024.21.6
Xiaojun Xu, Yixiao Chen, Jing Miao

Background: ChatGPT is a large language model (LLM) based on artificial intelligence (AI) capable of responding in multiple languages and generating nuanced and highly complex responses. While ChatGPT holds promising applications in medical education, its limitations and potential risks cannot be ignored.

Methods: A scoping review was conducted of English-language articles published after 2022 that discuss ChatGPT in the context of medical education. A literature search was performed using the PubMed/MEDLINE, Embase, and Web of Science databases, and information was extracted from the relevant studies that were ultimately included.

Results: ChatGPT exhibits various potential applications in medical education, such as providing personalized learning plans and materials, creating clinical practice simulation scenarios, and assisting in writing articles. However, challenges associated with academic integrity, data accuracy, and potential harm to learning were also highlighted in the literature. The paper emphasizes certain recommendations for using ChatGPT, including the establishment of guidelines. Based on the review, 3 key research areas were proposed: cultivating the ability of medical students to use ChatGPT correctly, integrating ChatGPT into teaching activities and processes, and proposing standards for the use of AI by medical students.

Conclusion: ChatGPT has the potential to transform medical education, but careful consideration is required for its full integration. To harness the full potential of ChatGPT in medical education, attention should not only be given to the capabilities of AI but also to its impact on students and teachers.

Presidential address 2024: the expansion of computer-based testing to numerous health professions licensing examinations in Korea, preparation of computer-based practical tests, and adoption of the medical metaverse.
IF 4.4 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-01-01 Epub Date: 2024-02-20 DOI: 10.3352/jeehp.2024.21.2
Hyunjoo Pai
{"title":"Presidential address 2024: the expansion of computer-based testing to numerous health professions licensing examinations in Korea, preparation of computer-based practical tests, and adoption of the medical metaverse.","authors":"Hyunjoo Pai","doi":"10.3352/jeehp.2024.21.2","DOIUrl":"10.3352/jeehp.2024.21.2","url":null,"abstract":"","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"21 ","pages":"2"},"PeriodicalIF":4.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948918/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139906639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Development and validity evidence for the resident-led large group teaching assessment instrument in the United States: a methodological study.
IF 4.4 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-01-01 Epub Date: 2024-02-23 DOI: 10.3352/jeehp.2024.21.3
Ariel Shana Frey-Vogel, Kristina Dzara, Kimberly Anne Gifford, Yoon Soo Park, Justin Berk, Allison Heinly, Darcy Wolcott, Daniel Adam Hall, Shannon Elliott Scott-Vernaglia, Katherine Anne Sparger, Erica Ye-Pyng Chung

Purpose: Despite educational mandates to assess resident teaching competence, limited instruments with validity evidence exist for this purpose. Existing instruments do not allow faculty to assess resident-led teaching in a large group format or whether teaching was interactive. This study gathers validity evidence on the use of the Resident-led Large Group Teaching Assessment Instrument (Relate), an instrument used by faculty to assess resident teaching competency. Relate comprises 23 behaviors divided into six elements: learning environment, goals and objectives, content of talk, promotion of understanding and retention, session management, and closure.

Methods: Messick's unified validity framework was used for this study. Investigators used video recordings of resident-led teaching from three pediatric residency programs to develop Relate and a rater guidebook. Faculty were trained on instrument use through frame-of-reference training. Resident teaching at all sites was video-recorded during 2018-2019. Two trained faculty raters assessed each video. Descriptive statistics on performance were obtained. Validity evidence sources include: rater training effect (response process), reliability and variability (internal structure), and impact on Milestones assessment (relations to other variables).

Results: Forty-eight videos, from 16 residents, were analyzed. Rater training improved inter-rater reliability from 0.04 to 0.64. The Φ-coefficient reliability was 0.50. There was a significant correlation between overall Relate performance and the pediatric teaching Milestone (r=0.34, P=0.019).
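The abstract reports inter-rater reliability and a correlation with the Milestone without naming the exact coefficients; the sketch below shows two common choices on invented ratings — Cohen's kappa for agreement between the two faculty raters, and a Pearson correlation between Relate totals and Milestone scores. The paper's actual statistics may differ.

```python
# Invented ratings; illustrative coefficients only.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

rater1 = [3, 4, 2, 5, 4, 3, 4, 2, 5, 3]   # placeholder per-video ratings
rater2 = [3, 4, 3, 5, 4, 3, 4, 2, 4, 3]
kappa = cohen_kappa_score(rater1, rater2)

relate = np.array([70, 65, 80, 55, 75, 60, 68, 72])     # placeholder totals
milestone = np.array([3.0, 2.5, 3.5, 2.0, 3.0, 2.5, 3.0, 3.5])
r, p = pearsonr(relate, milestone)
print(f"kappa={kappa:.2f}, r={r:.2f}, P={p:.3f}")
```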

Conclusion: Relate provides validity evidence with sufficient reliability to measure resident-led large-group teaching competence.

ChatGPT (GPT-4) passed the Japanese National License Examination for Pharmacists in 2022, answering all items including those with diagrams: a descriptive study.
IF 4.4 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-01-01 Epub Date: 2024-02-28 DOI: 10.3352/jeehp.2024.21.4
Hiroyasu Sato, Katsuhiko Ogasawara

Purpose: The objective of this study was to assess the performance of ChatGPT (GPT-4) on all items, including those with diagrams, in the Japanese National License Examination for Pharmacists (JNLEP) and compare it with the previous GPT-3.5 model’s performance.

Methods: This study targeted the 107th JNLEP, conducted in 2022; all 344 items were input into the GPT-4 model. Separately, the 284 items without diagrams were entered into the GPT-3.5 model. The answers were categorized and analyzed to determine accuracy rates by category, subject, and the presence or absence of diagrams. The accuracy rates were compared against the main passing criterion (an overall accuracy rate of ≥62.9%).

Results: GPT-4’s overall accuracy rate across all items of the 107th JNLEP was 72.5%, successfully meeting all the passing criteria. For the set of items without diagrams, its accuracy rate was 80.0%, significantly higher than that of the GPT-3.5 model (43.5%). For items that included diagrams, the GPT-4 model demonstrated an accuracy rate of 36.1%.

Conclusion: Advancements that allow GPT-4 to process images have made it possible for large language models to answer all items in medicine-related licensing examinations, including those with diagrams. This study’s findings confirm that ChatGPT (GPT-4) possesses sufficient knowledge to meet the passing criteria.

Events related to medication errors and related factors involving nurses’ behavior to reduce medication errors in Japan: a Bayesian network modeling-based factor analysis and scenario analysis.
IF 9.3 Q1 EDUCATION, SCIENTIFIC DISCIPLINES Pub Date : 2024-01-01 Epub Date: 2024-06-11 DOI: 10.3352/jeehp.2024.21.12
Naotaka Sugimura, Katsuhiko Ogasawara

Purpose: This study aimed to identify the relationships between medication errors and the factors affecting nurses’ knowledge and behavior in Japan using Bayesian network modeling. It also aimed to identify important factors through scenario analysis with consideration of nursing students’ and nurses’ education regarding patient safety and medications.

Methods: We used mixed methods. First, medication-related error events and associated factors were qualitatively extracted from 119 actual incident reports filed in 2022 in the database of the Japan Council for Quality Health Care. These events and factors were then quantitatively evaluated in a flow model using a Bayesian network, and a scenario analysis was conducted to estimate the posterior probabilities of events when the prior probabilities of selected factors were set to 0%.
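A minimal sketch of the scenario-analysis idea, assuming the pgmpy library; the single factor→event link and all probabilities below are hypothetical, not the paper's fitted 5-layer model. Forcing a factor's prior to 0% corresponds to querying the event with that factor fixed to its "absent" state.

```python
# Toy Bayesian network: one behavioral factor -> one medication-error event.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("skip_5_rights", "medication_error")])

# State 0 = factor absent, state 1 = factor present (probabilities invented).
cpd_factor = TabularCPD("skip_5_rights", 2, [[0.9], [0.1]])
cpd_event = TabularCPD(
    "medication_error", 2,
    [[0.99, 0.60],    # P(no error | factor absent, present)
     [0.01, 0.40]],   # P(error    | factor absent, present)
    evidence=["skip_5_rights"], evidence_card=[2],
)
model.add_cpds(cpd_factor, cpd_event)

infer = VariableElimination(model)
baseline = infer.query(["medication_error"])            # prior scenario
scenario = infer.query(["medication_error"],
                       evidence={"skip_5_rights": 0})   # factor forced to 0%
print(baseline)
print(scenario)
```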

Results: There were 10 types of events related to medication errors. A 5-layer flow model was created using Bayesian network analysis. The scenario analysis revealed that “failure to confirm the 5 rights,” “unfamiliarity with operations of medications,” “insufficient knowledge of medications,” and “assumptions and forgetfulness” were the factors most significantly associated with the occurrence of medication errors.

Conclusion: This study provided an estimate of the effects of mitigating nurses’ behavioral factors that trigger medication errors. The flow model itself can also be used as an educational tool to reflect on behavior when incidents occur. It is expected that patient safety education will be recognized as a major element of nursing education worldwide and that an integrated curriculum will be developed.
