{"title":"学生同意QM®评估标准作为在线课程质量的基准","authors":"Abdulaziz Sanosi","doi":"10.34190/ejel.21.3.2954","DOIUrl":null,"url":null,"abstract":"Many factors should be considered when planning a profitable Online Learning (OL) experience. Of these factors, quality is the most noticeable concern that received considerable debate. Over the years, several suggestions for standards for ensuring online course quality have been suggested. Among these, Quality Matters (QM) is the most used and principally accepted rubric for quality assurance. Much research explored its potential and impact on maintaining online course quality, yet more research is needed to parallel the expansion of online learning post-COVID-19 pandemic. Additionally, as more students are involved in fully OL classes, it is perceived that their perceptions of QM would be more authentic as they are stemmed from actual experience. To this end, the present study explores students’ perspectives towards QM rubrics as a benchmark for measuring OL course quality. The study adopted a mixed method where quantitative data were gathered by surveying 112 university students using a QM-based questionnaire of 42 items. Using average scores of the participant responses to the questionnaire, the researcher compared their evaluation to the QM general and specific standards. Furthermore, focus-group interviews were conducted to validate and justify the quantitative data. Frequencies of mentioning the most and least important standards were calculated. The findings revealed that the participants agreed to 71% of the QM rubrics. On the other hand, they overvalued standards related to learners’ privacy, course introduction, assessment, and course technology while undervalued standards associated with learning objectives, learner support and accessibility. 
The participants’ justifications for their judgments revolved around the importance of privacy in cyberspace, the vitality of online assessment tools, and their familiarity with the new technologies that made IT support a secondary standard for them. These results imply reconsidering OL course quality by focusing more on using variable technologies and tools that engage students in the experience, ensure their privacy, and facilitate their interaction with the course content. Further research that utilises larger samples and involves QM-based OL courses is suggested to support the present findings.","PeriodicalId":46105,"journal":{"name":"Electronic Journal of e-Learning","volume":null,"pages":null},"PeriodicalIF":2.4000,"publicationDate":"2023-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Students’ Agreement with QM® Rubrics as Benchmarks for Online Course Quality\",\"authors\":\"Abdulaziz Sanosi\",\"doi\":\"10.34190/ejel.21.3.2954\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Many factors should be considered when planning a profitable Online Learning (OL) experience. Of these factors, quality is the most noticeable concern that received considerable debate. Over the years, several suggestions for standards for ensuring online course quality have been suggested. Among these, Quality Matters (QM) is the most used and principally accepted rubric for quality assurance. Much research explored its potential and impact on maintaining online course quality, yet more research is needed to parallel the expansion of online learning post-COVID-19 pandemic. Additionally, as more students are involved in fully OL classes, it is perceived that their perceptions of QM would be more authentic as they are stemmed from actual experience. To this end, the present study explores students’ perspectives towards QM rubrics as a benchmark for measuring OL course quality. 
The study adopted a mixed method where quantitative data were gathered by surveying 112 university students using a QM-based questionnaire of 42 items. Using average scores of the participant responses to the questionnaire, the researcher compared their evaluation to the QM general and specific standards. Furthermore, focus-group interviews were conducted to validate and justify the quantitative data. Frequencies of mentioning the most and least important standards were calculated. The findings revealed that the participants agreed to 71% of the QM rubrics. On the other hand, they overvalued standards related to learners’ privacy, course introduction, assessment, and course technology while undervalued standards associated with learning objectives, learner support and accessibility. The participants’ justifications for their judgments revolved around the importance of privacy in cyberspace, the vitality of online assessment tools, and their familiarity with the new technologies that made IT support a secondary standard for them. These results imply reconsidering OL course quality by focusing more on using variable technologies and tools that engage students in the experience, ensure their privacy, and facilitate their interaction with the course content. 
Further research that utilises larger samples and involves QM-based OL courses is suggested to support the present findings.\",\"PeriodicalId\":46105,\"journal\":{\"name\":\"Electronic Journal of e-Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2023-07-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Electronic Journal of e-Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.34190/ejel.21.3.2954\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Electronic Journal of e-Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.34190/ejel.21.3.2954","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Students’ Agreement with QM® Rubrics as Benchmarks for Online Course Quality
Many factors should be considered when planning a successful Online Learning (OL) experience. Of these, quality is the most prominent concern and has received considerable debate. Over the years, several standards for ensuring online course quality have been proposed. Among these, Quality Matters (QM) is the most widely used and generally accepted rubric for quality assurance. Much research has explored its potential and its impact on maintaining online course quality, yet further research is needed to keep pace with the expansion of online learning after the COVID-19 pandemic. Additionally, as more students take fully online classes, their perceptions of QM can be expected to be more authentic, since they stem from actual experience. To this end, the present study explores students’ perspectives on QM rubrics as a benchmark for measuring OL course quality. The study adopted a mixed-methods design in which quantitative data were gathered by surveying 112 university students with a QM-based questionnaire of 42 items. Using the participants’ average scores on the questionnaire, the researcher compared their evaluations to the QM general and specific standards. Furthermore, focus-group interviews were conducted to validate and explain the quantitative data, and the frequencies with which participants mentioned the most and least important standards were calculated. The findings revealed that the participants agreed with 71% of the QM rubrics. At the same time, they overvalued standards related to learners’ privacy, course introduction, assessment, and course technology, while undervaluing standards associated with learning objectives, learner support, and accessibility. The participants’ justifications for their judgments revolved around the importance of privacy in cyberspace, the vital role of online assessment tools, and their familiarity with new technologies, which made IT support a secondary standard for them. These results suggest reconsidering OL course quality by focusing more on varied technologies and tools that engage students in the experience, ensure their privacy, and facilitate their interaction with the course content. Further research using larger samples and involving QM-based OL courses is suggested to support the present findings.
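The analysis described in the abstract — averaging participants’ questionnaire responses per QM standard and deriving an overall agreement rate — can be sketched as follows. This is a minimal illustrative sketch, not the author’s actual analysis: the item-to-standard mapping, the sample responses, and the agreement threshold of 3.5 on a 5-point Likert scale are all invented for demonstration.

```python
from statistics import mean

# Hypothetical data: participant -> {questionnaire item: Likert score 1-5}
responses = {
    "s1": {"q1": 5, "q2": 4, "q3": 2},
    "s2": {"q1": 4, "q2": 5, "q3": 3},
}

# Hypothetical mapping of each questionnaire item to one QM standard
item_to_standard = {
    "q1": "Course Overview",
    "q2": "Assessment",
    "q3": "Learner Support",
}

def standard_means(responses, item_to_standard):
    """Mean score per QM standard, pooled across all participants."""
    by_std = {}
    for answers in responses.values():
        for item, score in answers.items():
            by_std.setdefault(item_to_standard[item], []).append(score)
    return {std: mean(scores) for std, scores in by_std.items()}

def agreement_rate(means, threshold=3.5):
    """Share of standards whose mean score meets the agreement threshold."""
    agreed = sum(1 for m in means.values() if m >= threshold)
    return agreed / len(means)

means = standard_means(responses, item_to_standard)
print(means)                  # per-standard means
print(agreement_rate(means))  # fraction of standards "agreed" with
```

With these toy numbers, two of the three standards clear the threshold, giving an agreement rate of about 0.67 — analogous in form to the 71% agreement figure reported in the study, though computed on invented data.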