Assessing Telemedicine Competencies: Developing and Validating Learner Measures for Simulation-Based Telemedicine Training.

AMIA Annual Symposium Proceedings. Publication date: 2024-01-11; eCollection date: 2023-01-01.
Blake Lesselroth, Helen Monkman, Ryan Palmer, Craig Kuziemsky, Andrew Liew, Kristin Foulks, Deirdra Kelly, Ainsly Wolfinbarger, Frances Wen, Liz Kollaja, Shannon Ijams, Juell Homco
AMIA Annual Symposium Proceedings, vol. 2023, pp. 474-483. Journal article, eCollection. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785836/pdf/
Citations: 0

Abstract

In 2021, the Association of American Medical Colleges published Telehealth Competencies Across the Learning Continuum, a roadmap for designing telemedicine curricula and evaluating learners. While this document advances educators' shared understanding of telemedicine's core content and performance expectations, it does not include turn-key-ready evaluation instruments. At the University of Oklahoma School of Community Medicine, we developed a year-long telemedicine curriculum for third-year medical and second-year physician assistant students. We used the AAMC framework to create program objectives and instructional simulations. We designed and piloted an assessment rubric for eight AAMC competencies to accompany the simulations. In this monograph, we describe the rubric development, scores for students participating in simulations, and results comparing inter-rater reliability between faculty and standardized patient evaluators. Our preliminary work suggests that our rubric provides a practical method for evaluating learners by faculty during telemedicine simulations. We also identified opportunities for additional reliability and validity testing.
