Nick Guimbarda, Faizan Boghani, Matthew Tews, A J Kleinheksel
A Comparison of Two Debriefing Rubrics to Assess Facilitator Adherence to the PEARLS Debriefing Framework
Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare, pp. 358-366
DOI: 10.1097/SIH.0000000000000798
Published: 2024-12-01 (Epub 2024-04-24)
Citations: 0
Abstract
Introduction: Many educators have adopted the Promoting Excellence and Reflective Learning in Simulation (PEARLS) model to guide debriefing sessions in simulation-based learning. The PEARLS Debriefing Checklist (PDC), a 28-item instrument, and the PEARLS Debriefing Adherence Rubric (PDAR), a 13-item instrument, assess facilitator adherence to the model. The aims of this study were to collect evidence of concurrent validity and to evaluate their unique strengths.
Methods: A review of 130 video recorded debriefings from a synchronous high-fidelity mannequin simulation event involving third-year medical students was undertaken. Each debriefing was scored utilizing both instruments. Internal consistency was determined by calculating a Cronbach's α. A Pearson correlation was used to evaluate concurrent validity. Discrimination indices were also calculated.
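The reliability and validity statistics named in the Methods can be sketched in a few lines. The following is a minimal illustration, not the study's actual analysis code: the score lists are hypothetical, and real analyses typically use dedicated statistical software. Cronbach's α is computed from item variances and the variance of total scores; the Pearson coefficient correlates the two instruments' totals.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    item_scores: list of k items, each a list of n scores
    (one per debriefing). Uses population variances.
    """
    k = len(item_scores)
    # Total score per debriefing across all items
    totals = [sum(scores) for scores in zip(*item_scores)]
    item_var = sum(pvariance(scores) for scores in item_scores)
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

def pearson_r(x, y):
    """Pearson correlation between two lists of total scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical example: 3 items scored across 4 debriefings,
# plus paired totals from two instruments.
alpha = cronbach_alpha([[2, 3, 3, 4], [1, 3, 2, 4], [2, 2, 3, 3]])
r = pearson_r([10, 14, 12, 18], [8, 13, 11, 16])
```

In this framing, an instrument with more items (like the 28-item PDC) tends to produce a higher α than a shorter one, which is consistent with the study's contrast between the PDC's 0.714 and the PDAR's 0.515.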
Results: Cronbach's α values were 0.515 and 0.714 for the PDAR and PDC, respectively, with ≥0.70 to ≤0.90 considered to be an acceptable range. The Pearson correlation coefficient for the total sum of the scores of both instruments was 0.648, with a values between ±0.60 and ±0.80 considered strong correlations. All items on the PDAR had positive discrimination indices; 3 items on the PDC had indices ≤0, with values between -0.2 and 0.2 considered unsatisfactory. Four items on both instruments had indices >0.4, indicating only fair discrimination between high and low performers.
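The discrimination indices reported above follow the classic upper-lower group method: rank facilitators by total score, compare how often the top and bottom groups earn each item, and take the difference. A minimal sketch, with hypothetical data and an assumed 27% group fraction (the conventional cutoff, not stated in the abstract):

```python
def discrimination_index(item_correct, totals, frac=0.27):
    """Upper-lower discrimination index for one dichotomous item.

    item_correct: 1/0 per debriefing for this item
    totals: total instrument score per debriefing
    frac: fraction of the sample in each extreme group
    """
    # Indices sorted by total score, ascending
    order = sorted(range(len(totals)), key=lambda i: totals[i])
    g = max(1, int(len(totals) * frac))
    lower, upper = order[:g], order[-g:]
    p_upper = sum(item_correct[i] for i in upper) / g
    p_lower = sum(item_correct[i] for i in lower) / g
    return p_upper - p_lower
```

An index near 0 (or negative, as with 3 PDC items) means high and low performers earn the item equally often, so it adds noise rather than signal; values above 0.4 mark items that separate the groups well.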
Conclusions: Both instruments exhibit unique strengths and limitations. The PDC demonstrated greater internal consistency, likely secondary to having more items, with the tradeoff of redundant items and laborious implementation. Both had concurrent validity in nearly all subdomains. The PDAR had proportionally more items with high discrimination and no items with indices ≤0. A revised instrument incorporating PDC items with high reliability and validity and removing those identified as redundant or poor discriminators, the PDAR 2, is proposed.
About the journal:
Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare is a multidisciplinary publication encompassing all areas of applications and research in healthcare simulation technology. The journal is relevant to a broad range of clinical and biomedical specialties, and publishes original basic, clinical, and translational research on these topics and more: Safety and quality-oriented training programs; Development of educational and competency assessment standards; Reports of experience in the use of simulation technology; Virtual reality; Epidemiologic modeling; Molecular, pharmacologic, and disease modeling.