The Role of Qualitative and Quantitative Feedback on Faculties’ Quality of Writing Multiple Choice Questions
A. Shiani, S. M. Ahmadi, Ghobad Ramezani, F. Darabi, Forough Zanganeh, Farhad Salari
Educational Research in Medical Sciences, published 2023-02-27. DOI: 10.5812/erms-119114 (https://doi.org/10.5812/erms-119114)
Citations: 0
Abstract
Background: Multiple choice questions (MCQs) are the most common question format in clinical tests. The content validity and structural quality of such questions are persistent concerns for every education system. This study evaluated the effect of providing quantitative and qualitative feedback on the quality of faculty members’ MCQs. Methods: This analytical study was conducted on Kermanshah University of Medical Sciences faculty members whose MCQ tests were analyzed at least twice from 2018 to 2021. Quantitative data, including test validity and the difficulty and discrimination indices, were collected by experts using a computer algorithm. Results: The second analysis revealed that 14 (27.5%) faculty members had credit scores below 0.4, which was within the acceptable range for overall test validity. The difficulty index was higher at the second feedback than at the first, though the difference was not significant (0.46 ± 0.21 vs 0.55 ± 0.21, P = 0.30). No significant difference was found in the discrimination index (0.24 ± 0.12 vs 0.24 ± 0.10, P = 0.006). Furthermore, there were no significant differences before and after feedback for taxonomy I (61.29 ± 20.84 vs 59.32 ± 22.11, P = 0.54), II (29.71 ± 17.84 vs 32.76 ± 18.82, P = 0.39), or III (8.50 ± 16.60 vs 7.36 ± 14.48, P = 0.44) questions. Conclusions: Based on the results, the questions fell short of Bloom’s taxonomy standards and of the desired difficulty and discrimination indices. Moreover, providing feedback alone is not enough; proper planning by the authorities of educational and medical development centers is required to empower faculty members in this area.
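The difficulty and discrimination indices referenced in the abstract are standard classical test theory measures. The study's actual algorithm is not described, but a minimal sketch of the conventional definitions (difficulty as the proportion of examinees answering an item correctly; discrimination as the difference in proportion correct between the upper and lower 27% of examinees by total score) might look like this; the function name and parameters are illustrative, not taken from the study:

```python
# Hypothetical sketch of classical item analysis (not the study's actual algorithm).
# For one MCQ item: 'scores' holds each examinee's total test score, and
# 'correct' holds a 1/0 flag for whether that examinee answered this item right.

def item_indices(scores, correct, tail=0.27):
    """Return (difficulty, discrimination) for a single item.

    tail -- fraction of examinees forming the upper/lower groups
            (27% is the conventional choice).
    """
    n = len(scores)
    # Difficulty index: proportion of all examinees answering correctly.
    difficulty = sum(correct) / n
    # Rank examinees by total score (ascending) to form the two groups.
    order = sorted(range(n), key=lambda i: scores[i])
    k = max(1, round(tail * n))
    lower = sum(correct[i] for i in order[:k])    # bottom-scoring group
    upper = sum(correct[i] for i in order[-k:])   # top-scoring group
    # Discrimination index: upper-group minus lower-group proportion correct.
    discrimination = (upper - lower) / k
    return difficulty, discrimination
```

For example, an item answered correctly only by the top half of a ten-person class yields a difficulty of 0.50 and a discrimination of 1.00, the ideal separation between strong and weak examinees.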