Introduction: Facilitating debriefings in simulation is a complex task with high task load. The increasing availability of generative artificial intelligence (AI) offers an opportunity to support facilitators. We explored simulation facilitation and debriefing strategies using a large language model (LLM) to decrease facilitators' task load and allow for a more comprehensive debrief.
Methods: This prospective, observational, simulation-based pilot study was conducted at Yale University School of Medicine. For each simulation, a debriefing script was generated by passing a real-time transcription of the simulation case as input to the GPT-4o LLM. Thereafter, facilitators and learners completed surveys and task workload assessments. The primary outcome was the task workload as measured by the NASA-TLX scale. The secondary outcome was the perception of the AI technologies in the simulation, measured with survey-based questions.
Results: This study involved four facilitators and 25 learners, with all data being self-reported. Both groups showed strong enthusiasm for AI integration, with mean Likert scores of 4.75/5 and 4.0/5, respectively. NASA-TLX scores revealed moderate to high mental demand for facilitators (M = .8/21; SD = 6.4) and learners (M = 9.9/21; SD = 4.5). AI was perceived to help maintain focus (M = 4.8/5), support learning objectives (M = 4.2/5), and minimize distractions for both facilitators (M = 4.6/5) and teams (M = 4.5/5).
Conclusions: This study highlights the potential of LLM integration to aid debriefing by organizing complex information. Although facilitators reported a considerable task load, the findings suggest that LLMs can enhance the quality of simulation-based debriefing, while a continued need for human oversight remains.
