A Comparison of Two Debriefing Rubrics to Assess Facilitator Adherence to the PEARLS Debriefing Framework.

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare · Impact Factor 1.7 · JCR Q3 (Health Care Sciences & Services) · CAS Tier 3 (Medicine) · Publication date: 2024-12-01 (Epub 2024-04-24) · DOI: 10.1097/SIH.0000000000000798
Nick Guimbarda, Faizan Boghani, Matthew Tews, A J Kleinheksel

Abstract

Introduction: Many educators have adopted the Promoting Excellence and Reflective Learning in Simulation (PEARLS) model to guide debriefing sessions in simulation-based learning. The PEARLS Debriefing Checklist (PDC), a 28-item instrument, and the PEARLS Debriefing Adherence Rubric (PDAR), a 13-item instrument, both assess facilitator adherence to the model. The aims of this study were to collect evidence of concurrent validity for the two instruments and to evaluate their unique strengths.

Methods: A review of 130 video-recorded debriefings from a synchronous high-fidelity mannequin simulation event involving third-year medical students was undertaken. Each debriefing was scored with both instruments. Internal consistency was determined by calculating Cronbach's α for each instrument. A Pearson correlation was used to evaluate concurrent validity. Item discrimination indices were also calculated.
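The study's data are not public, but the two statistics named in the Methods can be illustrated on invented data. The sketch below computes Cronbach's α from a hypothetical debriefings-by-items matrix of binary adherence scores, and a Pearson r between hypothetical total scores from two instruments as a proxy for concurrent validity; the `demo` matrix and the second instrument's totals are fabricated for illustration only.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a debriefings-x-items score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical binary adherence scores: 6 debriefings x 4 items
demo = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
])
alpha = cronbach_alpha(demo)

# Concurrent validity: Pearson r between total scores from two instruments
# (the second instrument's totals are invented for the example)
pdar_totals = demo.sum(axis=1)
pdc_totals = pdar_totals * 2 + np.array([0, 1, 0, 1, 0, 1])
r = np.corrcoef(pdar_totals, pdc_totals)[0, 1]
```

Higher α indicates that the items move together; an r near ±1 indicates that the two instruments rank the same debriefings similarly.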

Results: Cronbach's α values were 0.515 for the PDAR and 0.714 for the PDC, with ≥0.70 to ≤0.90 considered an acceptable range. The Pearson correlation coefficient between the total scores of the two instruments was 0.648, with values between ±0.60 and ±0.80 considered strong correlations. All items on the PDAR had positive discrimination indices; 3 items on the PDC had indices ≤0, with values between -0.2 and 0.2 considered unsatisfactory. Only four items on each instrument had indices >0.4, indicating fair discrimination between high and low performers.
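The abstract does not state how the discrimination indices were computed; a common convention is the difference in item pass rates between high- and low-scoring groups (often the top and bottom 27% by total score). The sketch below assumes that convention, on invented data:

```python
import numpy as np

def discrimination_index(item: np.ndarray, totals: np.ndarray,
                         frac: float = 0.27) -> float:
    """Pass rate on one binary item in the top `frac` of debriefings
    (by total score) minus the pass rate in the bottom `frac`."""
    order = np.argsort(totals)
    n = max(1, int(round(len(totals) * frac)))
    low, high = order[:n], order[-n:]
    return item[high].mean() - item[low].mean()

# Hypothetical data: 10 debriefings with distinct totals; one binary
# item passed only by the higher-scoring half
totals = np.arange(10, dtype=float)
item = (totals >= 5).astype(float)
d = discrimination_index(item, totals)
```

An index near +1 (as here) means the item separates high from low performers sharply; values ≤0 mean low performers pass the item as often as high performers, which is why such items are flagged for removal.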

Conclusions: Both instruments exhibit unique strengths and limitations. The PDC demonstrated greater internal consistency, likely secondary to having more items, with the tradeoff of redundant items and laborious implementation. Both demonstrated concurrent validity in nearly all subdomains. The PDAR had proportionally more items with high discrimination and no items with indices ≤0. A revised instrument, the PDAR 2, is proposed, incorporating PDC items with high reliability and validity and removing those identified as redundant or poor discriminators.

Source journal: Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare
CiteScore: 4.00 · Self-citation rate: 8.30% · Articles per year: 158 · Review time: 6-12 weeks

About the journal: Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare is a multidisciplinary publication encompassing all areas of applications and research in healthcare simulation technology. The journal is relevant to a broad range of clinical and biomedical specialties, and publishes original basic, clinical, and translational research on these topics and more: safety and quality-oriented training programs; development of educational and competency assessment standards; reports of experience in the use of simulation technology; virtual reality; epidemiologic modeling; molecular, pharmacologic, and disease modeling.