Pub Date: 2025-12-12 | DOI: 10.1097/SIH.0000000000000904
Amy F Hildreth, Elizabeth Pearce, Sherri L Rudinsky, Cynthia S Shen, Rebekah Cole
Introduction: Peer role-playing, in which medical students alternate between provider and patient roles, is a core component of peer-assisted learning. While the educational value of playing the provider is well established, the extent to which students gain medical knowledge through acting as patients remains unclear.
Methods: In this quantitative study with qualitative components, 178 first-year medical students portrayed patients during a high-fidelity prehospital simulation. Medical knowledge was assessed with a 21-item multiple-choice test after simulation (162 responses; 91.0% response rate). An open-ended reflection prompt captured students' perceived learning. Chi-square analyses compared knowledge performance between students who portrayed a given scenario ("Actors") and those who did not ("nonactors"). Qualitative data were analyzed using reflexive thematic analysis.
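The item-level actor-versus-nonactor comparison described above can be sketched as a Pearson chi-square test on a 2x2 table of correct/incorrect responses. This is an illustrative sketch only; all counts below are hypothetical placeholders, not the study's data.

```python
# Hedged sketch of the per-item analysis: a Pearson chi-square statistic on a
# 2x2 table of correct/incorrect responses for actors vs. nonactors.
# All counts here are hypothetical, not taken from the study.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row_tot, col_tot in ((a, row1, col1), (b, row1, col2),
                                  (c, row2, col1), (d, row2, col2)):
        expected = row_tot * col_tot / n
        stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical item: 18 of 25 actors correct vs. 95 of 137 nonactors correct.
# A small statistic relative to the chi-square(1) distribution would indicate
# no evidence of a difference, consistent with the study's null findings.
print(round(chi_square_2x2(18, 7, 95, 42), 3))
```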
Results: Quantitative analysis revealed no statistically significant differences in performance between actors and nonactors across test items (P = 0.17-0.99). However, 160 students (89.9%) reported perceived gains in medical knowledge. Thematic analysis identified 3 primary learning mechanisms: observational learning, experiential learning, and direct instruction.
Conclusions: Although knowledge gains specific to patient roles were not captured through multiple-choice testing, students perceived substantial learning through peer role-play. The student-as-patient role may be intentionally designed to support cognitive as well as affective learning in simulation-based medical education.
Title: Do Students Learn From Playing the Patient? A Study of Peer Role-Play in Prehospital Simulation.
Pub Date: 2025-12-05 | DOI: 10.1097/SIH.0000000000000898
Catherine Patocka, Ingrid Anderson, Erin Brennan, Lauren Lacroix, Anjali Pandya, Heather Ganshorn, Andrew K Hall
Summary statement: Spaced learning is increasingly used in simulation-based education, yet its impact on learning, performance, and patient outcomes is unclear. We compared spaced training (several discrete sessions) with massed training (a single session) for skills acquisition in health professionals. We systematically reviewed randomized or prospective comparative studies. Of 4572 citations screened, 15 met inclusion criteria. Studies covered resuscitation and surgical procedures, most with spacing intervals of about 1 week. Despite heterogeneity in study design, participants, and outcomes, spaced training was generally as effective as massed training. Some evidence suggested advantages for spaced training in skill retention, particularly for time to complete procedures. Findings were inconsistent across other outcomes. No studies demonstrated improvements in patient care practices, patient outcomes, or broader educational effects. These results suggest spaced simulation may offer retention benefits for certain skills, but more research is needed to assess its impact on clinical and system-level outcomes.
Title: The Impact of Simulation-Based Spaced Training for Skills Acquisition on Learning and Performance Outcomes Among Healthcare Professionals: A Systematic Review.
Pub Date: 2025-12-01 | Epub Date: 2025-08-07 | DOI: 10.1097/SIH.0000000000000877
Adam Cheng, Vikhashni Nagesh, Susan Eller, Vincent Grant, Yiqun Lin
Introduction: Large language model-based generative AI tools, such as the Chat Generative Pre-trained Transformer (ChatGPT) platform, have been used to assist with writing academic manuscripts. Little is known about ChatGPT's ability to accurately cite relevant references in health care simulation-related scholarly manuscripts. In this study, we sought to: (1) determine the reference accuracy and citation relevance among health care simulation debriefing articles generated by 2 different models of ChatGPT and (2) determine if ChatGPT models can be trained with specific prompts to improve reference accuracy and citation relevance.
Methods: The ChatGPT-4 and ChatGPT o1 models were asked to generate scholarly articles with appropriate references based on 3 different article titles about health care simulation debriefing. Five articles with references were generated for each article title: 3 under ChatGPT-4 training conditions and 2 under ChatGPT o1 training conditions. Each article was assessed independently by 2 blinded reviewers for reference accuracy and citation relevance.
Results: Fifteen articles were generated in total: 9 articles by ChatGPT-4 and 6 articles by ChatGPT o1. A total of 60.4% of the 303 references generated across 5 training conditions were classified as accurate, with no significant difference in reference accuracy between the 5 conditions. A total of 22.2% of the 451 citations were classified as highly relevant, with no significant difference in citation relevance across the 5 conditions.
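The abstract reports only point estimates for these proportions. As an illustrative sketch (our addition, not part of the study's analysis), a Wilson score interval for the overall reference-accuracy proportion (183 of 303 references, about 60.4%) can be computed as:

```python
# Illustrative sketch only: a 95% Wilson score interval for the overall
# reference-accuracy proportion (183 of 303 references, ~60.4%). The interval
# is our addition; the study reports only the point estimate.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

low, high = wilson_ci(183, 303)
print(f"accuracy 60.4%, 95% CI {low:.1%}-{high:.1%}")
```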
Conclusions: Both the ChatGPT-4 and ChatGPT o1 models are unreliable with respect to reference accuracy and citation relevance in generated debriefing articles. Neither reference accuracy nor citation relevance improved even when some degree of training was built into the ChatGPT prompts.
Title: Exploring AI Hallucinations of ChatGPT: Reference Accuracy and Citation Relevance of ChatGPT Models and Training Conditions. Pages: 413-418.
Pub Date: 2025-12-01 | Epub Date: 2025-06-23 | DOI: 10.1097/SIH.0000000000000866
Ellen L Duncan, Joanne M Agnant, Selin T Sagalowsky
Background: Families overwhelmingly want to be present during pediatric resuscitations, and their presence offers myriad benefits. However, there is little evidence on how to teach and assess key patient- and family-centered communication behaviors. Our objective was to apply a modified Delphi methodology to develop and refine a simulation-based assessment tool focusing on crucial behaviors for healthcare providers offering emotional support to patients and families during pediatric medical resuscitations.
Methods: We identified 4 behavioral domains and 14 subdomains through a literature review, focus groups with our institution's Family and Youth Advisory Councils, and adaptation of existing simulation-based communication assessment tools. A panel of 9 national experts conducted rounds of iterative revision and rating of candidate behaviors for inclusion, and we calculated mean approval ratings (1 = Do not include; 2 = Include with modifications; 3 = Include as is) for each subdomain.
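The rating aggregation described above can be sketched as follows. The panel size and scores below are invented for illustration only; the study used a panel of 9 experts, and its actual ratings are not reproduced here.

```python
# Hedged sketch of the Delphi aggregation: each expert rates a candidate
# behavior 1 (do not include), 2 (include with modifications), or 3 (include
# as is), and each subdomain is summarized by its mean approval rating.
# The panel scores below are invented for illustration only.
from statistics import mean

panel_ratings = {
    "Introductions": [3, 3, 3, 3, 2, 3],       # one expert requested edits
    "Option to step out": [3, 3, 3, 3, 3, 3],  # unanimous "include as is"
}

for subdomain, scores in panel_ratings.items():
    print(f"{subdomain}: mean approval {mean(scores):.2f}")
```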
Results: Experts engaged in 5 iterative rounds of revision. None of the candidate behaviors were eliminated, and 1 ("Option to step out") was added to the "Respect and Value" domain. There was near-perfect consensus on the language of the final tool, with mean approval scores of 3.0 for all but 1 subdomain ("Introductions"), which scored 2.83 owing to requests for minor grammatical edits; these edits were incorporated into the final assessment tool.
Conclusions: We created a novel simulation assessment tool based on a literature review, key stakeholder input, and a consensus of national experts through a modified Delphi method. Our final simulation assessment tool is behaviorally anchored, can be completed by a simulated participant or observer, and may serve to educate healthcare teams engaged in pediatric resuscitations regarding patient- and family-centered communication.
Title: Development of a Tool to Evaluate Emotional Support for Patients and Families During Simulated Pediatric Resuscitations: A Modified Delphi Study. Pages: 372-378.
Pub Date: 2025-12-01 | Epub Date: 2025-07-09 | DOI: 10.1097/SIH.0000000000000864
Jarred P Williams, Melita Macdonald, Peter A Watts, Brad F Peckler
Introduction: Ultrasound-guided intravenous (USIV) cannulation is a common alternative when IV access cannot otherwise be obtained. Many hospitals teach this skill with commercial CAE Blue Phantom gelatinous training blocks. However, their cost is a barrier, which has led to experimentation with creative alternatives. Recent studies have trialed SCOBY (Symbiotic Culture of Bacteria and Yeast) for producing training models for medical procedures. SCOBY is a biofilm-like structure appearing as a thick, rubbery film. We aimed to develop a 2-vessel SCOBY-based model and compare its effectiveness for teaching USIV against the Blue Phantom.
Methods: Twenty-three emergency medicine clinicians performed USIV cannulation on each model and completed pre- and post-procedure questionnaires.
Results: Seventy-four percent of participants indicated that the SCOBY model more closely resembled the clinical reality of human tissue compared with 13% for the Phantom. SCOBY provided an improved visual appearance, physical touch, feel of the procedure, and appearance of "subdermal tissues" on ultrasound compared to the Phantom.
Conclusion: These results suggest a promising future for SCOBY as a cost-effective alternative to teaching clinical skills.
Title: Comparative Evaluation of Blue Phantom and SCOBY-Based Models for Ultrasound-Guided Intravenous Cannulation Training. Pages: 406-412.
Pub Date: 2025-12-01 | Epub Date: 2025-05-07 | DOI: 10.1097/SIH.0000000000000863
Mahrokh M Kobeissi, Alice M Teall, Heather M Jones, Katherine E Chike-Harris, F Shawn Galin, Jacqueline LaManna, Julianne Ossege, Laura Reed, Tedra Smith, Kristin Stankard, Carolyn Rutledge
Summary statement: The exponential growth of telehealth in health care, triggered by the COVID-19 pandemic, has necessitated updates to educational standards including the integration of telehealth competencies in academic curricula to prepare students for technology-enabled clinical practice. Simulation-based experiences (SBEs) are a valuable pedagogical tool for teaching and assessing telehealth skills in safe and controlled virtual learning environments. Simulated or standardized patients (SPs) are an essential component of SBEs for creating high-quality and engaging learning experiences. SPs in telehealth environments must learn to manage technical interfaces, modify communication for virtual interactions, and convey physical ailments without in-person contact. SP educators and teaching faculty have a valuable role in preparing SPs to effectively portray authentic and consistent telehealth roles while navigating technology and maintaining case fidelity. SP educators contribute critical expertise in SP methodology and are essential collaborators in the development, implementation, and evaluation of telehealth simulation programs. Telehealth SBEs have unique considerations, workflows, and technologies that differ from in-person encounters, and the complexities of these differences underscore the critical need for specialized training approaches for creating authentic and effective telehealth simulations. Formal published resources for training SPs in telehealth contexts remain limited. This article provides guidance to support comprehensive simulation programs delivering telehealth education, specifically emphasizing SP methodology for remote settings.
Title: Best Practice Guidelines for Preparing Simulated Patients for Telehealth Simulation. Pages: 388-398.
Pub Date: 2025-12-01 | Epub Date: 2025-06-09 | DOI: 10.1097/SIH.0000000000000859
Laura R Joyce, Maggie Meeks, Susan G Somerville
Introduction: Effective debriefing is a key element of simulation-based learning, providing an opportunity to facilitate critical reflection and promote constructive conversations, with generalization of the learning experience to real-life health care and collaborative practice. Co-debriefing, meaning a debrief involving more than 1 simulation facilitator, has potential benefits as well as challenges. Interprofessional co-debriefing, where 2 or more members of different professional groups debrief together, has not yet been fully explored in the literature.
Methods: A qualitative approach was used to explore the benefits and challenges of interprofessional co-debriefing from a simulation faculty perspective. Individual semistructured interviews were recorded and transcribed, with data analyzed using reflexive thematic analysis.
Results: Ten interviews were conducted with health care professionals in Christchurch, New Zealand, who co-debrief simulations with faculty from other professions. Three major themes were identified: (1) Developing Debriefers: simulation faculty require opportunities to develop interprofessional co-debriefing skills; (2) Teaming and Collaboration: bringing co-debriefing teams together and role modeling interprofessional collaboration; (3) Logistics and Sustainability: top-down institutional and bottom-up champion support is required to overcome the logistical barriers of bringing multiple professional groups together. The reported benefits and challenges of interprofessional co-debriefing were linked to these themes.
Conclusions: This interprofessional group of simulation debriefers identified a number of benefits to interprofessional co-debriefing, along with several challenges. Debriefers require support to develop as role models of interprofessional collaboration. Peer mentoring and faculty development opportunities, along with consideration of the logistics that make this model of debriefing sustainable are needed for this nascent field of simulation-based education practice to evolve and mature.
Title: Interprofessional Co-Debriefing in Simulation-Role Modeling Collaboration: A Qualitative Study. Pages: 357-365.
Pub Date: 2025-12-01 | Epub Date: 2025-07-07 | DOI: 10.1097/SIH.0000000000000870
Cynthia J Mosher
Summary statement: This special article explores the transformative contributions of Dr. Howard S. Barrows to health professions education, focusing on his pioneering development of two seminal methodologies: problem-based learning and standardized patients. Drawing on Barrows's work, educational literature, and the reflections of Gayle Gliva-McConvey, a leading pioneer in standardized patient methodology and close collaborator of Dr. Barrows, this article provides an in-depth historical account of how these innovations reshaped curriculum design, clinical reasoning, and simulation-based assessment. It also discusses the global adoption, theoretical underpinnings, and enduring impact of these learner-centered strategies, which continue to shape health professions education today.
{"title":"Dr. Howard S. Barrows: Innovator of the Standardized Patient and Problem-Based Learning Revolutions in Health Professions Education.","authors":"Cynthia J Mosher","doi":"10.1097/SIH.0000000000000870","DOIUrl":"10.1097/SIH.0000000000000870","url":null,"abstract":"<p><strong>Summary statement: </strong>This special article explores the transformative contributions of Dr. Howard S. Barrows to health professions education, focusing on his pioneering development of two seminal methodologies: problem-based learning and standardized patients. Drawing on Barrows's work, educational literature, and the reflections of Gayle Gliva-McConvey, a leading pioneer in standardized patient methodology and close collaborator of Dr. Barrows, this article provides an in-depth historical account of how these innovations reshaped curriculum design, clinical reasoning, and simulation-based assessment. It also discusses the global adoption, theoretical underpinnings, and enduring impact of these learner-centered strategies, which continue to shape health professions education today.</p>","PeriodicalId":49517,"journal":{"name":"Simulation in Healthcare-Journal of the Society for Simulation in Healthcare","volume":" ","pages":"419-423"},"PeriodicalIF":2.1,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144576758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-01Epub Date: 2025-03-26DOI: 10.1097/SIH.0000000000000854
Christina R Miller, Sara K Greer, Serkan Toy, Adam Schiavi
Introduction: Cognitive load (CL) theory provides a framework for optimizing learning in simulation. Measures of CL components (intrinsic [IL], extraneous [EL], and germane [GL]) may inform simulation design but lack validity evidence. The optimal timing for CL assessment and the contributions of debriefing to CL are not established.
Methods: This prospective observational study assessed self-reported CL for first-year anesthesiology residents during 10 individual-learner simulations. Following each simulation and before debriefing, participants completed 4 CL measures: Paas scale, National Aeronautics and Space Administration-Task Load Index (NASA-TLX), Cognitive Load Component questionnaire (CLC) and Cognitive Load Assessment Scales in Simulation (CLAS-Sim). After debriefing, participants repeated the Paas and CLAS-Sim.
Results: Twenty-nine first-year anesthesiology residents participated. Correlations were significant among all total CL measures (r range = 0.51-0.69) and between CLC and CLAS-Sim IL (r = 0.66), EL (r = 0.41), and GL (r = 0.61) (all P < 0.01). We observed a significant interaction between total CL measures and case complexity, and a significant main effect of case complexity for CLC and CLAS-Sim IL, with no main effect of the IL measure used. The CLAS-Sim EL was higher (P = 0.001) than the respective CLC scales across cases, with no difference for GL. Participants reported higher CLAS-Sim GL after (versus before) debriefing (P < 0.001), with no difference in IL, EL, or Paas scores.
Conclusions: This study provides further validity evidence for the CLAS-Sim and demonstrates generalizability in a different population of medical trainees. The CLAS-Sim GL increases following debriefing, reflecting expected learning and providing initial validity evidence for the GL scale.
{"title":"Debriefing Is Germane to Simulation-Based Learning: Parsing Cognitive Load Components and the Effect of Debriefing.","authors":"Christina R Miller, Sara K Greer, Serkan Toy, Adam Schiavi","doi":"10.1097/SIH.0000000000000854","DOIUrl":"10.1097/SIH.0000000000000854","url":null,"abstract":"<p><strong>Introduction: </strong>Cognitive load (CL) theory provides a framework for optimizing learning in simulation. Measures of CL components (intrinsic [IL], extraneous [EL] and germane [GL]) may inform simulation design but lack validity evidence. The optimal timing for CL assessment and contributions of debriefing to CL are not established.</p><p><strong>Methods: </strong>This prospective observational study assessed self-reported CL for first-year anesthesiology residents during 10 individual-learner simulations. Following each simulation and before debriefing, participants completed 4 CL measures: Paas scale, National Aeronautics and Space Administration-Task Load Index (NASA-TLX), Cognitive Load Component questionnaire (CLC) and Cognitive Load Assessment Scales in Simulation (CLAS-Sim). After debriefing, participants repeated the Paas and CLAS-Sim.</p><p><strong>Results: </strong>Twenty-nine first-year anesthesiology residents participated. Correlations were significant among all total CL measures (r range = 0.51-0.69) and between CLC and CLAS-Sim IL (r = 0.66), EL (r = 0.41), and GL (r = 0.61) (all P < 0.01). We observed a significant interaction between total CL measures and case complexity, and a significant main effect of case complexity for CLC and CLAS-Sim IL, with no main effect for IL measure. The CLAS-Sim EL was higher (P = 0.001) than respective CLC scales across cases, with no difference for GL. Participants reported higher CLAS-Sim GL after (versus before) debriefing (P < 0.001), with no difference in IL, EL, or Paas scores.</p><p><strong>Conclusions: </strong>This study provides further validity evidence for the CLAS-Sim and demonstrates generalizability in a different population of medical trainees. The CLAS-Sim GL increases following debriefing, reflecting expected learning, demonstrating initial GL scale validity evidence.</p>","PeriodicalId":49517,"journal":{"name":"Simulation in Healthcare-Journal of the Society for Simulation in Healthcare","volume":" ","pages":"349-356"},"PeriodicalIF":2.1,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143711908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-01Epub Date: 2025-05-06DOI: 10.1097/SIH.0000000000000861
Eury Hong, Sundes Kazmir, Benjamin Dylik, Marc Auerbach, Matteo Rosati, Sofia Athanasopoulou, Russell Himmelstein, Travis M Whitfill, Lindsay Johnston, Traci A Wolbrink, Arielle Shibi Rosen, Isabel T Gross
Introduction: Facilitating debriefings in simulation is a complex task with high task load. The increasing availability of generative artificial intelligence (AI) offers an opportunity to support facilitators. We explored simulation facilitation and debriefing strategies using a large language model (LLM) to decrease facilitators' task load and allow for a more comprehensive debrief.
Methods: This prospective, observational, simulation-based pilot study was conducted at Yale University School of Medicine. For each simulation, a debriefing script was generated by passing a real-time transcription of the simulation case as input to the GPT-4o LLM. Thereafter, facilitators and learners completed surveys and task workload assessments. The primary outcome was the task workload as measured by the NASA-TLX scale. The secondary outcome was the perception of the AI technologies in the simulation, measured with survey-based questions.
Results: This study involved four facilitators and 25 learners, with all data being self-reported. Both groups showed strong enthusiasm for AI integration, with mean Likert scores of 4.75/5 and 4.0/5, respectively. NASA-TLX scores revealed moderate to high mental demand for facilitators (M = .8/21; SD = 6.4) and learners (M = 9.9/21; SD = 4.5). AI was perceived to help maintain focus (M = 4.8/5), support learning objectives (M = 4.2/5), and minimize distractions for both facilitators (M = 4.6/5) and teams (M = 4.5/5).
Conclusions: This study highlights the potential of LLM integration to aid debriefing by organizing complex information. Although facilitators reported a considerable task load, the findings suggest that an LLM can enhance the quality of simulation-based debriefing, though there remains a continuous need for human oversight.
{"title":"Exploring the Use of a Large Language Model in Simulation Debriefing: An Observational Simulation-Based Pilot Study.","authors":"Eury Hong, Sundes Kazmir, Benjamin Dylik, Marc Auerbach, Matteo Rosati, Sofia Athanasopoulou, Russell Himmelstein, Travis M Whitfill, Lindsay Johnston, Traci A Wolbrink, Arielle Shibi Rosen, Isabel T Gross","doi":"10.1097/SIH.0000000000000861","DOIUrl":"10.1097/SIH.0000000000000861","url":null,"abstract":"<p><strong>Introduction: </strong>Facilitating debriefings in simulation is a complex task with high task load. The increasing availability of generative artificial intelligence (AI) offers an opportunity to support facilitators. We explored simulation facilitation and debriefing strategies using a large language model (LLM) to decrease facilitators' task load and allow for a more comprehensive debrief.</p><p><strong>Methods: </strong>This prospective, observational, simulation-based pilot study was conducted at Yale University School of Medicine. For each simulation, a debriefing script was generated by passing a real-time transcription of the simulation case as input to the GPT-4o LLM. Thereafter, facilitators and learners completed surveys and task workload assessments. The primary outcome was the task workload as measured by the NASA-TLX scale. The secondary outcome was the perception of the AI technologies in the simulation, measured with survey-based questions.</p><p><strong>Results: </strong>This study involved four facilitators and 25 learners, with all data being self-reported. All showed strong enthusiasm for AI integration, with mean Likert scores of 4.75/5 and 4.0/5, respectively. NASA-TLX scores revealed moderate to high mental demand for facilitators (M = .8/21; SD = 6.4) and learners (M = 9.9/21; SD = 4.5). AI was perceived to help maintain focus (M = 4.8/5), support learning objectives (M = 4.2/5), and minimize distractions for both facilitators (M = 4.6/5) and teams (M = 4.5/5).</p><p><strong>Conclusions: </strong>This study highlights LLM integration in aiding debriefing by organizing complex information. Though facilitators reported a considerable task load, findings suggest that LLM can enhance simulation-based debrief quality, while there remains a continuous need for human oversight.</p>","PeriodicalId":49517,"journal":{"name":"Simulation in Healthcare-Journal of the Society for Simulation in Healthcare","volume":" ","pages":"366-371"},"PeriodicalIF":2.1,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144039477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}