Assessing Telemedicine Competencies: Developing and Validating Learner Measures for Simulation-Based Telemedicine Training

Blake Lesselroth, Helen Monkman, Ryan Palmer, Craig Kuziemsky, Andrew Liew, Kristin Foulks, Deirdra Kelly, Ainsly Wolfinbarger, Frances Wen, Liz Kollaja, Shannon Ijams, Juell Homco
{"title":"评估远程医疗能力:为基于模拟的远程医疗培训开发和验证学习者衡量标准。","authors":"Blake Lesselroth, Helen Monkman, Ryan Palmer, Craig Kuziemsky, Andrew Liew, Kristin Foulks, Deirdra Kelly, Ainsly Wolfinbarger, Frances Wen, Liz Kollaja, Shannon Ijams, Juell Homco","doi":"","DOIUrl":null,"url":null,"abstract":"<p><p>In 2021, the Association of American Medical Colleges published Telehealth Competencies Across the Learning Continuum, a roadmap for designing telemedicine curricula and evaluating learners. While this document advances educators' shared understanding of telemedicine's core content and performance expectations, it does not include turn-key-ready evaluation instruments. At the University of Oklahoma School of Community Medicine, we developed a year-long telemedicine curriculum for third-year medical and second-year physician assistant students. We used the AAMC framework to create program objectives and instructional simulations. We designed and piloted an assessment rubric for eight AAMC competencies to accompany the simulations. In this monograph, we describe the rubric development, scores for students participating in simulations, and results comparing inter-rater reliability between faculty and standardized patient evaluators. Our preliminary work suggests that our rubric provides a practical method for evaluating learners by faculty during telemedicine simulations. We also identified opportunities for additional reliability and validity testing.</p>","PeriodicalId":72180,"journal":{"name":"AMIA ... Annual Symposium proceedings. AMIA Symposium","volume":"2023 ","pages":"474-483"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785836/pdf/","citationCount":"0","resultStr":"{\"title\":\"Assessing Telemedicine Competencies: Developing and Validating Learner Measures for Simulation-Based Telemedicine Training.\",\"authors\":\"Blake Lesselroth, Helen Monkman, Ryan Palmer, Craig Kuziemsky, Andrew Liew, Kristin Foulks, Deirdra Kelly, Ainsly Wolfinbarger, Frances Wen, Liz Kollaja, Shannon Ijams, Juell Homco\",\"doi\":\"\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>In 2021, the Association of American Medical Colleges published Telehealth Competencies Across the Learning Continuum, a roadmap for designing telemedicine curricula and evaluating learners. While this document advances educators' shared understanding of telemedicine's core content and performance expectations, it does not include turn-key-ready evaluation instruments. At the University of Oklahoma School of Community Medicine, we developed a year-long telemedicine curriculum for third-year medical and second-year physician assistant students. We used the AAMC framework to create program objectives and instructional simulations. We designed and piloted an assessment rubric for eight AAMC competencies to accompany the simulations. In this monograph, we describe the rubric development, scores for students participating in simulations, and results comparing inter-rater reliability between faculty and standardized patient evaluators. Our preliminary work suggests that our rubric provides a practical method for evaluating learners by faculty during telemedicine simulations. We also identified opportunities for additional reliability and validity testing.</p>\",\"PeriodicalId\":72180,\"journal\":{\"name\":\"AMIA ... Annual Symposium proceedings. 
AMIA Symposium\",\"volume\":\"2023 \",\"pages\":\"474-483\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10785836/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AMIA ... Annual Symposium proceedings. AMIA Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AMIA ... Annual Symposium proceedings. AMIA Symposium","FirstCategoryId":"1085","ListUrlMain":"","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/1/1 0:00:00","PubModel":"eCollection","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In 2021, the Association of American Medical Colleges (AAMC) published Telehealth Competencies Across the Learning Continuum, a roadmap for designing telemedicine curricula and evaluating learners. While this document advances educators' shared understanding of telemedicine's core content and performance expectations, it does not include turnkey evaluation instruments. At the University of Oklahoma School of Community Medicine, we developed a year-long telemedicine curriculum for third-year medical and second-year physician assistant students. We used the AAMC framework to create program objectives and instructional simulations. To accompany the simulations, we designed and piloted an assessment rubric covering eight AAMC competencies. In this paper, we describe the rubric's development, report scores for students participating in the simulations, and compare inter-rater reliability between faculty and standardized patient evaluators. Our preliminary work suggests that the rubric gives faculty a practical method for evaluating learners during telemedicine simulations. We also identified opportunities for additional reliability and validity testing.