Operationalizing a Weighted Performance Scoring Model for Sustainable e-Learning in Medical Education: Insights from Expert Judgement

IF 2.4 · Q1 · EDUCATION & EDUCATIONAL RESEARCH · Electronic Journal of e-Learning · Pub Date: 2024-07-19 · DOI: 10.34190/ejel.22.8.3427
Deborah Oluwadele, Yashik Singh, Timothy T. Adeliyi
{"title":"医学教育中可持续电子学习的加权绩效评分模型的操作化:专家判断的启示","authors":"Deborah Oluwadele, Yashik Singh, Timothy T. Adeliyi","doi":"10.34190/ejel.22.8.3427","DOIUrl":null,"url":null,"abstract":"Validation is needed for any newly developed model or framework because it requires several real-life applications. The investment made into e-learning in medical education is daunting, as is the expectation for a positive return on investment. The medical education domain requires data-wise implementation of e-learning as the debate continues about the fitness of e-learning in medical education. The domain seldom employs frameworks or models to evaluate students' performance in e-learning contexts. However, when utilized, the Kirkpatrick evaluation model is a common choice. This model has faced significant criticism for its failure to incorporate constructs that assess technology and its influence on learning. This paper aims to assess the efficiency of a model developed to determine the effectiveness of e-learning in medical education, specifically targeting student performance. The model was validated through Delphi-based Expert Judgement Techniques (EJT), and Cronbach's alpha was used to determine the reliability of the proposed model. Simple Correspondence Analysis (SCA) was used to measure if stability is reached among experts. Fourteen experts, professors, senior lecturers, and researchers with an average of 12 years of experience in designing and evaluating students' performance in e-learning in medical education participated in the evaluation of the model based on two rounds of questionnaires developed to operationalize the constructs of the model. During the first round, the model had 64 % agreement from all experts; however, 100% agreement was achieved after the second round, with all statements achieving an average of 52% strong agreement and 48% agreement from all 14 experts; the evaluation dimension had the most substantial agreements, next to the design dimension. The results suggest that the model is valid and may be applied as Key Performance Metrics when designing and evaluating e-learning courses in medical education.","PeriodicalId":46105,"journal":{"name":"Electronic Journal of e-Learning","volume":null,"pages":null},"PeriodicalIF":2.4000,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Operationalizing a Weighted Performance Scoring Model for Sustainable e-Learning in Medical Education: Insights from Expert Judgement\",\"authors\":\"Deborah Oluwadele, Yashik Singh, Timothy T. Adeliyi\",\"doi\":\"10.34190/ejel.22.8.3427\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Validation is needed for any newly developed model or framework because it requires several real-life applications. The investment made into e-learning in medical education is daunting, as is the expectation for a positive return on investment. The medical education domain requires data-wise implementation of e-learning as the debate continues about the fitness of e-learning in medical education. The domain seldom employs frameworks or models to evaluate students' performance in e-learning contexts. However, when utilized, the Kirkpatrick evaluation model is a common choice. This model has faced significant criticism for its failure to incorporate constructs that assess technology and its influence on learning. 
This paper aims to assess the efficiency of a model developed to determine the effectiveness of e-learning in medical education, specifically targeting student performance. The model was validated through Delphi-based Expert Judgement Techniques (EJT), and Cronbach's alpha was used to determine the reliability of the proposed model. Simple Correspondence Analysis (SCA) was used to measure if stability is reached among experts. Fourteen experts, professors, senior lecturers, and researchers with an average of 12 years of experience in designing and evaluating students' performance in e-learning in medical education participated in the evaluation of the model based on two rounds of questionnaires developed to operationalize the constructs of the model. During the first round, the model had 64 % agreement from all experts; however, 100% agreement was achieved after the second round, with all statements achieving an average of 52% strong agreement and 48% agreement from all 14 experts; the evaluation dimension had the most substantial agreements, next to the design dimension. The results suggest that the model is valid and may be applied as Key Performance Metrics when designing and evaluating e-learning courses in medical education.\",\"PeriodicalId\":46105,\"journal\":{\"name\":\"Electronic Journal of e-Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2024-07-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Electronic Journal of e-Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.34190/ejel.22.8.3427\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Electronic Journal of e-Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.34190/ejel.22.8.3427","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract

Any newly developed model or framework needs validation, since confidence in it is built only through repeated real-life application. The investment in e-learning in medical education is substantial, as is the expectation of a positive return on that investment. The field needs data-informed implementation of e-learning while the debate over its fitness for medical education continues. Frameworks or models for evaluating student performance in e-learning contexts are seldom used in this domain; when one is, the Kirkpatrick evaluation model is the common choice, and it has drawn significant criticism for omitting constructs that assess technology and its influence on learning. This paper assesses the efficiency of a model developed to determine the effectiveness of e-learning in medical education, specifically with respect to student performance. The model was validated through a Delphi-based Expert Judgement Technique (EJT); Cronbach's alpha was used to establish its reliability, and Simple Correspondence Analysis (SCA) was used to measure whether stability had been reached among the experts. Fourteen experts (professors, senior lecturers, and researchers) with an average of 12 years of experience in designing and evaluating student performance in e-learning in medical education evaluated the model through two rounds of questionnaires developed to operationalize its constructs. In the first round the model obtained 64% agreement from the experts; after the second round agreement reached 100%, with statements receiving on average 52% strong agreement and 48% agreement from the 14 experts. The evaluation dimension drew the strongest agreement, followed by the design dimension. The results suggest that the model is valid and may serve as a set of key performance metrics when designing and evaluating e-learning courses in medical education.
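The abstract reports reliability via Cronbach's alpha. As a minimal sketch of that computation (the paper's actual rating matrix is not published here, so the data below is invented purely for illustration), alpha can be derived from an experts-by-statements matrix of Likert scores:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_raters x n_items) matrix of Likert scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of statements
    item_var = scores.var(axis=0, ddof=1).sum()  # summed per-statement variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of each expert's total
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical ratings from 14 experts on 6 statements (1-5 Likert scale),
# generated at random so the function runs end to end.
rng = np.random.default_rng(42)
ratings = rng.integers(3, 6, size=(14, 6))
print(f"alpha = {cronbach_alpha(ratings):.3f}")
```

On real data the rows would be the 14 experts and the columns the questionnaire statements; alpha values above roughly 0.7 are conventionally read as acceptable reliability.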
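The round-over-round figures (64% agreement in round one, 100% after round two, split into 52% strong agreement and 48% agreement) suggest a simple per-rating tally. A hedged sketch, assuming a 1-5 Likert coding in which 4 = agree and 5 = strongly agree (the paper's exact coding is not given here):

```python
import numpy as np

def agreement_breakdown(scores: np.ndarray) -> dict:
    """Share of ratings that are 'agree' (4) or 'strongly agree' (5).

    Assumes 1-5 Likert coding; returns per-level and combined percentages.
    """
    scores = np.asarray(scores)
    total = scores.size
    strong = (scores == 5).sum() / total * 100   # strong-agreement share
    agree = (scores == 4).sum() / total * 100    # plain-agreement share
    return {"agree_%": agree, "strong_%": strong, "combined_%": agree + strong}

# Reusing the invented ratings matrix from the sketch above:
rng = np.random.default_rng(42)
ratings = rng.integers(3, 6, size=(14, 6))
print(agreement_breakdown(ratings))
```

In a Delphi process this tally would be recomputed after each questionnaire round; the study's stopping point (100% combined agreement) corresponds to every rating falling at 4 or 5.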
Source journal
Electronic Journal of e-Learning (EDUCATION & EDUCATIONAL RESEARCH)
CiteScore: 5.90
Self-citation rate: 18.20%
Articles published: 34
Review time: 20 weeks
Latest articles from this journal
Exploring Student and AI Generated Texts: Reflections on Reflection Texts
Technostress Impact on Educator Productivity: Gender Differences in Jordan's Higher Education
Quo Vadis, University? A Roadmap for AI and Ethics in Higher Education
Examining Student Characteristics, Self-Regulated Learning Strategies, and Their Perceived Effects on Satisfaction and Academic Performance in MOOCs
Operationalizing a Weighted Performance Scoring Model for Sustainable e-Learning in Medical Education: Insights from Expert Judgement