Comparing Analytic and Mixed-Approach Rubrics for Academic Poster Quality.

IF 3.8 · CAS Zone 4 (Education) · JCR Q1 (EDUCATION, SCIENTIFIC DISCIPLINES) · American Journal of Pharmaceutical Education · Pub Date: 2025-02-13 · DOI: 10.1016/j.ajpe.2025.101372
Michael J Peeters, Michael J Gonyeau
{"title":"Comparing Analytic and Mixed-Approach Rubrics for Academic Poster Quality.","authors":"Michael J Peeters, Michael J Gonyeau","doi":"10.1016/j.ajpe.2025.101372","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>While there has been great interest in rubrics in recent decades, there are different types (with different advantages and disadvantages). Here, we examined and compared use of analytic rubrics (AR) and mixed-approach rubric (MAR) types to assess quality of research posters at an academic conference.</p><p><strong>Methods: </strong>A prior systematic review identified 12 rubrics; we compared two notable analytic-rubrics (AR1, AR2) with a newer mixed-approach-rubric (MAR). Sixty randomly-selected research posters were downloaded from an academic conference poster repository. Two experienced academicians independently scored all posters using the AR1, AR2 and MAR. Time-to-score was also noted. For inter-rater reliability of scores from each rubric, traditional intraclass correlations as well as modern/advanced Rasch Measurement were examined and compared among AR1, AR2 and MAR.</p><p><strong>Results: </strong>Scores for poster quality varied using all rubrics. For traditional indices of inter-rater reliability, all rubrics had equal or similar intraclass correlations using agreement, while AR1 and AR2 were slightly higher using consistency. The modern Rasch Measurement showed that the single-item MAR reliably separated posters into two distinct groups (low-quality versus high-quality); same as the 9-item AR2, though better than the 9-item AR1. Furthermore, the MAR's single-item rating-scale functioned well, while AR1 had one misfunctioning item rating-scale and AR2 had four misfunctioning item rating-scales. Notably, the MAR was quicker-to-score than the AR1 or AR2.</p><p><strong>Conclusion: </strong>This MAR measured similar or better than two ARs, and was quicker to score. This investigation illuminated common misconceptions that ARs are more accurate and a best use of time for effective measurement.</p>","PeriodicalId":55530,"journal":{"name":"American Journal of Pharmaceutical Education","volume":" ","pages":"101372"},"PeriodicalIF":3.8000,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Journal of Pharmaceutical Education","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1016/j.ajpe.2025.101372","RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Citations: 0

Abstract

Objective: While there has been great interest in rubrics in recent decades, rubrics come in different types, each with its own advantages and disadvantages. Here, we examined and compared the use of analytic rubrics (ARs) and a mixed-approach rubric (MAR) to assess the quality of research posters at an academic conference.

Methods: A prior systematic review identified 12 rubrics; we compared two notable analytic rubrics (AR1, AR2) with a newer mixed-approach rubric (MAR). Sixty randomly selected research posters were downloaded from an academic conference poster repository. Two experienced academicians independently scored all posters using AR1, AR2, and the MAR; time-to-score was also noted. For inter-rater reliability of scores from each rubric, traditional intraclass correlations as well as modern Rasch Measurement were examined and compared among AR1, AR2, and the MAR.
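(Not part of the published abstract.) As an illustration of the traditional reliability index named above, the sketch below computes single-rater intraclass correlations for both absolute agreement, ICC(A,1), and consistency, ICC(C,1), from a standard two-way ANOVA decomposition. The posters-by-raters score matrix and the helper name icc_two_way are hypothetical, not the authors' analysis code.

```python
import numpy as np

def icc_two_way(x):
    """Single-rater ICCs from an n_targets x n_raters score matrix.

    Returns (ICC(A,1), ICC(C,1)) in McGraw & Wong's notation:
    absolute agreement and consistency, respectively.
    """
    x = np.asarray(x, dtype=float)
    n, k = x.shape                      # n posters, k raters
    grand = x.mean()
    row_means = x.mean(axis=1)          # per-poster means
    col_means = x.mean(axis=0)          # per-rater means

    # Two-way ANOVA mean squares
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))

    icc_consistency = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    icc_agreement = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
    return icc_agreement, icc_consistency

# Hypothetical scores: two raters scoring five posters
scores = np.array([[78, 80], [65, 70], [90, 88], [55, 62], [72, 75]])
agree, consist = icc_two_way(scores)
print(f"ICC(A,1) = {agree:.3f}, ICC(C,1) = {consist:.3f}")
```

Agreement penalizes systematic severity differences between raters (via the rater mean square), while consistency ignores them; that distinction is why the Results can report similar agreement ICCs but slightly different consistency ICCs across rubrics.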

Results: Scores for poster quality varied under all rubrics. For traditional indices of inter-rater reliability, all rubrics had equal or similar intraclass correlations for agreement, while AR1 and AR2 were slightly higher for consistency. Rasch Measurement showed that the single-item MAR reliably separated posters into two distinct groups (low-quality versus high-quality), matching the 9-item AR2 and outperforming the 9-item AR1. Furthermore, the MAR's single-item rating scale functioned well, whereas AR1 had one malfunctioning item rating scale and AR2 had four. Notably, the MAR was quicker to score than AR1 or AR2.
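(Also not from the paper.) The "two distinct groups" finding reflects the standard Rasch separation statistic: given a separation reliability R for posters, the separation index is G = sqrt(R / (1 - R)), and Wright's strata estimate (4G + 1) / 3 gives the number of statistically distinct quality levels. A minimal sketch, using a purely hypothetical reliability value:

```python
import math

def rasch_separation(reliability):
    """Rasch separation index G and Wright's strata estimate
    (number of statistically distinct groups) from a
    separation reliability R in (0, 1)."""
    g = math.sqrt(reliability / (1.0 - reliability))
    strata = (4.0 * g + 1.0) / 3.0
    return g, strata

# Hypothetical: a poster-separation reliability of about 0.61
# yields G ~= 1.25, i.e. roughly two distinct strata
g, strata = rasch_separation(0.61)
print(f"separation G = {g:.2f}, strata = {strata:.1f}")
```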

Conclusion: The MAR measured as well as or better than the two ARs, and it was quicker to score. This investigation exposed the common misconceptions that ARs are more accurate and the best use of time for effective measurement.

Source journal: American Journal of Pharmaceutical Education · CiteScore 4.30 · Self-citation rate 15.20% · Annual publications 114
About the journal: The Journal accepts unsolicited manuscripts that have not been published and are not under consideration for publication elsewhere. The Journal only considers material related to pharmaceutical education for publication. Authors must prepare manuscripts to conform to the Journal style (Author Instructions). All manuscripts are subject to peer review and approval by the editor prior to acceptance for publication. Reviewers are assigned by the editor with the advice of the editorial board as needed. Manuscripts are submitted and processed online (Submit a Manuscript) using Editorial Manager, an online manuscript tracking system that facilitates communication between the editorial office, editor, associate editors, reviewers, and authors. After a manuscript is accepted, it is scheduled for publication in an upcoming issue of the Journal. All manuscripts are formatted and copyedited, and returned to the author for review and approval of the changes. Approximately 2 weeks prior to publication, the author receives an electronic proof of the article for final review and approval. Authors are not assessed page charges for publication.
Latest articles in this journal:
- A Scoping Review of Planetary Health Education in Pharmacy Curricula.
- Comparing Analytic and Mixed-Approach Rubrics for Academic Poster Quality.
- Threshold concepts as a framework for understanding the internal work in professional identity formation.
- Exploring the Challenges Student Pharmacists Confront when Learning to Detect Medication-Related Problems in Electronic Health Records: Implications for Instructional Design.
- Fixed, Systematically Formed versus Continuously Changing Random Team Assignments and Outcomes in a Therapeutics Course.