The Debriefing Assessment in Real Time (DART) tool for simulation-based medical education.

Advances in simulation (London, England) · IF 2.8 · Q2 (Health Care Sciences & Services)
Pub Date: 2023-03-14 · DOI: 10.1186/s41077-023-00248-1
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10013984/pdf/
Kaushik Baliga, Louis P Halamek, Sandra Warburton, Divya Mathias, Nicole K Yamada, Janene H Fuerch, Andrew Coggins
Citations: 0

Abstract

Background: Debriefing is crucial for enhancing learning following healthcare simulation. Various validated tools have been shown to have contextual value for assessing debriefers. The Debriefing Assessment in Real Time (DART) tool may offer an alternative or additional assessment of conversational dynamics during debriefings.

Methods: This is a multi-method international study investigating reliability and validity. Enrolled raters (n = 12) were active simulation educators. Following tool training, the raters were asked to score a mixed sample of debriefings. Descriptive statistics were recorded, with the coefficient of variation (CV%) and Cronbach's α used to estimate reliability. Raters returned a detailed reflective survey following their contribution. Kane's framework was used to construct validity arguments.
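The two reliability statistics named above can be sketched briefly. This is a minimal illustration, not the study's analysis code: the score values below are invented, and the layout assumed is a simple debriefings-by-raters count matrix.

```python
import numpy as np

def cv_percent(scores):
    """Coefficient of variation (%): sample SD as a fraction of the mean."""
    scores = np.asarray(scores, dtype=float)
    return scores.std(ddof=1) / scores.mean() * 100

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (debriefings x raters) score matrix."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                              # number of raters
    item_vars = ratings.var(axis=0, ddof=1).sum()     # per-rater variances
    total_var = ratings.sum(axis=1).var(ddof=1)       # variance of row totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical counts of one component (e.g. instructor questions)
# tallied by four raters for the same debriefing.
counts = [12, 14, 13, 15]
print(f"CV% = {cv_percent(counts):.1f}")

# Hypothetical matrix: rows = debriefings, columns = raters.
matrix = [[12, 14, 13],
          [20, 22, 21],
          [ 8,  9,  8],
          [15, 17, 16]]
print(f"alpha = {cronbach_alpha(matrix):.3f}")
```

A lower CV% indicates closer agreement among raters on a component; α near 1 indicates high internal consistency across raters.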

Results: The 8 debriefings (μ = 15.4 min (SD 2.7)) included 45 interdisciplinary learners at various levels of training. Reliability (mean CV%) for key components was as follows: instructor questions μ = 14.7%, instructor statements μ = 34.1%, and trainee responses μ = 29.0%. Cronbach α ranged from 0.852 to 0.978 across the debriefings. Post-experience responses suggested that DARTs can highlight suboptimal practices including unqualified lecturing by debriefers.

Conclusion: The DART demonstrated acceptable reliability and may have a limited role in assessment of healthcare simulation debriefing. Inherent complexity and emergent properties of debriefing practice should be accounted for when using this tool.

Source journal: Advances in simulation (London, England)
CiteScore: 5.70 · Self-citation rate: 0.00% · Review time: 12 weeks