{"title":"Back to basics for student satisfaction: improving learning rather than constructing fatuous rankings","authors":"L. Harvey","doi":"10.1080/13538322.2022.2050477","DOIUrl":null,"url":null,"abstract":"There is growing concern expressed in this journal and elsewhere about the misdirection of student feedback processes. ‘Feedback’ in this sense refers to the expressed opinions of students about the service they receive as students. This may include perceptions about the learning and teaching, course organisation, learning support and environment. The problem is that feedback seems increasingly to have become a ritualistic process that results in very little if any action and, is thereby, decried as of little value. Student indifference because of the formulaic nature of the feedback and the failure to see any changes enacted only serves to reinforce the pointlessness of the process. The problem, though, is not the indifference or contempt with the process. That is the symptom. The problem is the lack of desire to use student views to make changes compounded by the obsession with standardisation of questions in fatuous national surveys. Standardising student feedback is the enemy of improvement. It misses the whole point. It facilitates ludicrous and entirely pointless rankings. Student feedback is a serious matter that provides the basis for a fundamental exploration of what works and what doesn’t work for students. It is not about creating league tables or rating teachers. Student feedback is fundamentally about making changes to the student experience at a level that improves the experience for students: teaching and learning at a programme level, general facilities at a university level. It is time to return to using student feedback as an improvement tool. 
Complacent and relatively meaningless one-size-fits-all surveys used to rank entire institutions are misleading, especially to prospective students, for whose benefit the obsession with league tables is supposedly aimed. Zineldin et al. (2011), for example, showed that in their study that the ten critical components of student satisfaction, in order of importance, were as follows: (1) cleanliness of classrooms (2) cleanliness of toilets (3) the skill of the professors attending the class (4) politeness of professors (5) physical appearance of professors and assistants (6) responsiveness of the professors to students’ needs and questions (7) cleanliness of the food court (8) physical appearance of classrooms (9) politeness of assistants (10) the sense of physical security the students felt on the university campus. Not many of these criteria are likely to have prominence in national surveys that have not engaged with student views before the questionnaire is constructed. While this list may be ‘idiosyncratic’ of the specific study, it is indicative of the variability of student perspectives and their considerable variance from the bland and generic statements that are found in national surveys.","PeriodicalId":46354,"journal":{"name":"Quality in Higher Education","volume":null,"pages":null},"PeriodicalIF":1.1000,"publicationDate":"2022-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Quality in Higher Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/13538322.2022.2050477","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Abstract
There is growing concern expressed in this journal and elsewhere about the misdirection of student feedback processes. ‘Feedback’ in this sense refers to the expressed opinions of students about the service they receive as students. This may include perceptions about learning and teaching, course organisation, learning support and the environment. The problem is that feedback seems increasingly to have become a ritualistic process that results in very little if any action and is thereby decried as of little value. Student indifference, caused by the formulaic nature of the feedback and the failure to see any changes enacted, only serves to reinforce the pointlessness of the process. The problem, though, is not the indifference toward, or contempt for, the process. That is the symptom. The problem is the lack of desire to use student views to make changes, compounded by the obsession with standardisation of questions in fatuous national surveys. Standardising student feedback is the enemy of improvement. It misses the whole point. It facilitates ludicrous and entirely pointless rankings. Student feedback is a serious matter that provides the basis for a fundamental exploration of what works and what does not work for students. It is not about creating league tables or rating teachers. Student feedback is fundamentally about making changes to the student experience at a level that improves the experience for students: teaching and learning at a programme level, general facilities at a university level. It is time to return to using student feedback as an improvement tool. Complacent and relatively meaningless one-size-fits-all surveys used to rank entire institutions are misleading, especially to prospective students, at whom the obsession with league tables is supposedly aimed. Zineldin et al. (2011), for example, showed in their study that the ten critical components of student satisfaction, in order of importance, were as follows: (1) cleanliness of classrooms; (2) cleanliness of toilets; (3) the skill of the professors attending the class; (4) politeness of professors; (5) physical appearance of professors and assistants; (6) responsiveness of the professors to students’ needs and questions; (7) cleanliness of the food court; (8) physical appearance of classrooms; (9) politeness of assistants; (10) the sense of physical security the students felt on the university campus. Not many of these criteria are likely to have prominence in national surveys that have not engaged with student views before the questionnaire is constructed. While this list may be ‘idiosyncratic’ to the specific study, it is indicative of the variability of student perspectives and their considerable variance from the bland and generic statements that are found in national surveys.
Journal introduction
Quality in Higher Education is aimed at those interested in the theory, practice and policies relating to the control, management and improvement of quality in higher education. The journal is receptive to critical and phenomenological as well as positivistic studies. The journal would like to publish more studies that use hermeneutic, semiotic, ethnographic or dialectical research, as well as the more traditional studies based on quantitative surveys, in-depth interviews and focus groups. Papers that have empirical research content are particularly welcome. The editor especially wishes to encourage papers on: reported research results, especially where these assess the impact of quality assurance systems, procedures and methodologies; theoretical analyses of quality and quality initiatives in higher education; comparative evaluation and international aspects of practice and policy with a view to identifying transportable methods, systems and good practice; quality assurance and standards monitoring of transnational higher education; the nature and impact of student feedback; improvements in learning and teaching that impact on quality and standards; links between quality assurance and employability; and evaluations of the impact of quality procedures at national level, backed up by research evidence.