{"title":"Comparing Analytic and Mixed-Approach Rubrics for Academic Poster Quality.","authors":"Michael J Peeters, Michael J Gonyeau","doi":"10.1016/j.ajpe.2025.101372","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>While there has been great interest in rubrics in recent decades, there are different types (with different advantages and disadvantages). Here, we examined and compared use of analytic rubrics (AR) and mixed-approach rubric (MAR) types to assess quality of research posters at an academic conference.</p><p><strong>Methods: </strong>A prior systematic review identified 12 rubrics; we compared two notable analytic-rubrics (AR1, AR2) with a newer mixed-approach-rubric (MAR). Sixty randomly-selected research posters were downloaded from an academic conference poster repository. Two experienced academicians independently scored all posters using the AR1, AR2 and MAR. Time-to-score was also noted. For inter-rater reliability of scores from each rubric, traditional intraclass correlations as well as modern/advanced Rasch Measurement were examined and compared among AR1, AR2 and MAR.</p><p><strong>Results: </strong>Scores for poster quality varied using all rubrics. For traditional indices of inter-rater reliability, all rubrics had equal or similar intraclass correlations using agreement, while AR1 and AR2 were slightly higher using consistency. The modern Rasch Measurement showed that the single-item MAR reliably separated posters into two distinct groups (low-quality versus high-quality); same as the 9-item AR2, though better than the 9-item AR1. Furthermore, the MAR's single-item rating-scale functioned well, while AR1 had one misfunctioning item rating-scale and AR2 had four misfunctioning item rating-scales. Notably, the MAR was quicker-to-score than the AR1 or AR2.</p><p><strong>Conclusion: </strong>This MAR measured similar or better than two ARs, and was quicker to score. This investigation illuminated common misconceptions that ARs are more accurate and a best use of time for effective measurement.</p>","PeriodicalId":55530,"journal":{"name":"American Journal of Pharmaceutical Education","volume":" ","pages":"101372"},"PeriodicalIF":3.8000,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"American Journal of Pharmaceutical Education","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1016/j.ajpe.2025.101372","RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Abstract
Objective: Rubrics have attracted great interest in recent decades, yet they come in different types, each with its own advantages and disadvantages. Here, we examined and compared analytic rubric (AR) and mixed-approach rubric (MAR) types for assessing the quality of research posters at an academic conference.
Methods: A prior systematic review identified 12 rubrics; we compared two notable analytic rubrics (AR1, AR2) with a newer mixed-approach rubric (MAR). Sixty randomly selected research posters were downloaded from an academic conference poster repository. Two experienced academicians independently scored all posters using AR1, AR2, and MAR; time to score was also recorded. For inter-rater reliability of scores from each rubric, traditional intraclass correlations as well as modern Rasch Measurement were examined and compared among AR1, AR2, and MAR.
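For readers less familiar with the agreement-versus-consistency distinction in intraclass correlations, the sketch below computes both single-rater forms, ICC(A,1) for absolute agreement and ICC(C,1) for consistency, from a two-way posters-by-raters score matrix using the standard ANOVA formulations of McGraw and Wong (1996). The data and function name are hypothetical illustrations, not the authors' analysis code.

```python
import numpy as np

def icc_agreement_consistency(scores):
    """Single-rater ICCs from a two-way (posters x raters) score matrix.

    ICC(A,1) (absolute agreement) is lowered by systematic rater
    severity/leniency; ICC(C,1) (consistency) ignores such offsets.
    Hypothetical helper for illustration only.
    """
    n, k = scores.shape                      # n posters, k raters
    grand = scores.mean()
    row_means = scores.mean(axis=1)          # per-poster means
    col_means = scores.mean(axis=0)          # per-rater means

    # Two-way ANOVA mean squares
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)   # posters
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

    icc_c = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    icc_a = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
    return icc_a, icc_c

# Made-up totals: 6 posters scored by 2 raters, with rater 2
# systematically about one point more lenient than rater 1.
scores = np.array([[7, 8], [5, 6], [9, 9], [4, 5], [6, 8], [8, 9]], float)
icc_a, icc_c = icc_agreement_consistency(scores)
print(f"ICC(A,1) = {icc_a:.2f}, ICC(C,1) = {icc_c:.2f}")  # ~0.81 vs ~0.94
```

Because the toy data include a systematic leniency offset, the consistency ICC exceeds the agreement ICC; a rater-severity difference is exactly the kind of effect that depresses agreement but not consistency.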
Results: Poster-quality scores varied with all rubrics. For traditional indices of inter-rater reliability, all rubrics had equal or similar intraclass correlations for agreement, while AR1 and AR2 were slightly higher for consistency. Rasch Measurement showed that the single-item MAR reliably separated posters into two distinct groups (low quality versus high quality), matching the 9-item AR2 and outperforming the 9-item AR1. Furthermore, the MAR's single-item rating scale functioned well, whereas AR1 had one malfunctioning item rating scale and AR2 had four. Notably, the MAR was quicker to score than AR1 or AR2.
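The Rasch separation finding can likewise be made concrete. Given poster measures (in logits) and their standard errors from a Rasch analysis, the separation index, separation reliability, and the number of statistically distinct strata follow from standard formulas (Wright and Masters). The sketch below uses hypothetical measures chosen to yield roughly two strata, echoing the low-quality versus high-quality split reported here; it is illustrative only, not the authors' analysis.

```python
import numpy as np

def rasch_separation(measures, std_errors):
    """Separation index G, separation reliability, and distinct strata
    from Rasch measures (logits) and their standard errors.
    Hypothetical helper for illustration only."""
    m = np.asarray(measures, float)
    se = np.asarray(std_errors, float)
    observed_var = m.var(ddof=1)             # observed spread of measures
    error_var = np.mean(se ** 2)             # mean-square measurement error
    true_var = max(observed_var - error_var, 0.0)
    g = np.sqrt(true_var / error_var)        # separation index
    reliability = true_var / observed_var    # separation reliability
    strata = (4 * g + 1) / 3                 # statistically distinct levels
    return g, reliability, strata

# Hypothetical poster measures (logits) and standard errors
measures = [-1.2, -0.8, -0.3, 0.2, 0.7, 1.0]
ses = [0.5] * 6
g, rel, strata = rasch_separation(measures, ses)
print(f"G = {g:.2f}, reliability = {rel:.2f}, strata = {strata:.1f}")
# -> strata near 2: the instrument distinguishes about two quality levels
```

A strata value near 2 means the rubric statistically distinguishes about two levels of poster quality, which is what "separated posters into two distinct groups" conveys.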
Conclusion: The MAR measured as well as or better than the two ARs and was quicker to score. This investigation challenged the common misconceptions that ARs are more accurate and the best use of time for effective measurement.
Journal Description
The Journal accepts unsolicited manuscripts that have not been published and are not under consideration for publication elsewhere. The Journal only considers material related to pharmaceutical education for publication. Authors must prepare manuscripts to conform to the Journal style (Author Instructions). All manuscripts are subject to peer review and approval by the editor prior to acceptance for publication. Reviewers are assigned by the editor with the advice of the editorial board as needed. Manuscripts are submitted and processed online (Submit a Manuscript) using Editorial Manager, an online manuscript tracking system that facilitates communication between the editorial office, editor, associate editors, reviewers, and authors.
After a manuscript is accepted, it is scheduled for publication in an upcoming issue of the Journal. All manuscripts are formatted, copyedited, and returned to the author for review and approval of the changes. Approximately 2 weeks prior to publication, the author receives an electronic proof of the article for final review and approval. Authors are not assessed page charges for publication.