'What would my peers say?' Comparing the opinion-based method with the prediction-based method in Continuing Medical Education course evaluation.

Canadian Medical Education Journal · Published 2024-07-12 (eCollection 2024-07-01) · DOI: 10.36834/cmej.77580 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11302746/pdf/
Jamie S Chua, Merel van Diepen, Marjolijn D Trietsch, Friedo W Dekker, Johanna Schönrock-Adema, Jacqueline Bustraan
{"title":"'<i>What would my peers say?</i>' Comparing the opinion-based method with the prediction-based method in Continuing Medical Education course evaluation.","authors":"Jamie S Chua, Merel van Diepen, Marjolijn D Trietsch, Friedo W Dekker, Johanna Schönrock-Adema, Jacqueline Bustraan","doi":"10.36834/cmej.77580","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Although medical courses are frequently evaluated via surveys with Likert scales ranging from \"<i>strongly agree</i>\" to \"<i>strongly disagree</i>,\" low response rates limit their utility. In undergraduate medical education, a new method with students predicting what their peers would say, required fewer respondents to obtain similar results. However, this prediction-based method lacks validation for continuing medical education (CME), which typically targets a more heterogeneous group than medical students.</p><p><strong>Methods: </strong>In this study, 597 participants of a large CME course were randomly assigned to either express personal opinions on a five-point Likert scale (opinion-based method; <i>n</i> = 300) or to predict the percentage of their peers choosing each Likert scale option (prediction-based method; <i>n</i> = 297). For each question, we calculated the minimum numbers of respondents needed for stable average results using an iterative algorithm. We compared mean scores and the distribution of scores between both methods.</p><p><strong>Results: </strong>The overall response rate was 47%. The prediction-based method required fewer respondents than the opinion-based method for similar average responses. Mean response scores were similar in both groups for most questions, but prediction-based outcomes resulted in fewer extreme responses (strongly agree/disagree).</p><p><strong>Conclusions: </strong>We validated the prediction-based method in evaluating CME. We also provide practical considerations for applying this method.</p>","PeriodicalId":72503,"journal":{"name":"Canadian medical education journal","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11302746/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Canadian medical education journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.36834/cmej.77580","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/7/1 0:00:00","PubModel":"eCollection","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Although medical courses are frequently evaluated via surveys with Likert scales ranging from "strongly agree" to "strongly disagree," low response rates limit their utility. In undergraduate medical education, a new method, in which students predicted what their peers would say, required fewer respondents to obtain similar results. However, this prediction-based method has not yet been validated for continuing medical education (CME), which typically targets a more heterogeneous group than medical students.

Methods: In this study, 597 participants in a large CME course were randomly assigned either to express personal opinions on a five-point Likert scale (opinion-based method; n = 300) or to predict the percentage of their peers choosing each Likert scale option (prediction-based method; n = 297). For each question, we used an iterative algorithm to calculate the minimum number of respondents needed for a stable average result. We compared mean scores and the distribution of scores between the two methods.
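
The abstract does not spell out the iterative algorithm, so the following Python sketch is only one plausible reading: for each question it searches for the smallest subsample size whose mean reliably stays within a tolerance of the full-sample mean, and it shows how a prediction-based answer (predicted percentages per Likert option) could be collapsed into a mean score comparable to an opinion-based one. The function names, tolerance, and resampling scheme are illustrative assumptions, not the authors' code.

```python
import random
import statistics

def min_respondents_for_stable_mean(responses, tol=0.1, n_draws=1000, seed=42):
    # Hypothetical reconstruction: repeatedly subsample k responses and
    # accept the smallest k whose subsample means all stay within `tol`
    # of the full-sample mean. `responses` are Likert scores (1-5) for
    # a single question.
    rng = random.Random(seed)
    full_mean = statistics.mean(responses)
    for k in range(2, len(responses) + 1):
        if all(
            abs(statistics.mean(rng.sample(responses, k)) - full_mean) <= tol
            for _ in range(n_draws)
        ):
            return k
    return len(responses)

def mean_from_predictions(predicted_pct):
    # Collapse one respondent's predicted distribution over the five
    # Likert options (percentages, ideally summing to 100) into a mean
    # score, so both methods can be compared on the same scale.
    total = sum(predicted_pct.values())
    return sum(option * pct for option, pct in predicted_pct.items()) / total

# Toy usage (illustrative numbers, not study data):
opinions = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 5, 4, 3, 4, 4]
print(min_respondents_for_stable_mean(opinions, tol=0.3))
print(mean_from_predictions({1: 5, 2: 10, 3: 20, 4: 40, 5: 25}))  # -> 3.7
```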

Results: The overall response rate was 47%. The prediction-based method required fewer respondents than the opinion-based method to reach similar average responses. Mean response scores were similar in both groups for most questions, but the prediction-based method yielded fewer extreme responses (strongly agree or strongly disagree).

Conclusions: We validated the prediction-based method for CME course evaluation. We also offer practical considerations for applying this method.
