Back to basics for student satisfaction: improving learning rather than constructing fatuous rankings
Pub Date: 2022-03-31 | DOI: 10.1080/13538322.2022.2050477
L. Harvey
There is growing concern expressed in this journal and elsewhere about the misdirection of student feedback processes. ‘Feedback’ in this sense refers to the expressed opinions of students about the service they receive as students. This may include perceptions about the learning and teaching, course organisation, learning support and environment. The problem is that feedback seems increasingly to have become a ritualistic process that results in very little if any action and is thereby decried as of little value. Student indifference, born of the formulaic nature of the feedback and the failure to see any changes enacted, only serves to reinforce the pointlessness of the process. The problem, though, is not the indifference towards, or contempt for, the process. That is the symptom. The problem is the lack of desire to use student views to make changes, compounded by the obsession with standardisation of questions in fatuous national surveys. Standardising student feedback is the enemy of improvement. It misses the whole point. It facilitates ludicrous and entirely pointless rankings. Student feedback is a serious matter that provides the basis for a fundamental exploration of what works and what does not work for students. It is not about creating league tables or rating teachers. Student feedback is fundamentally about making changes at a level that improves the experience for students: teaching and learning at programme level, general facilities at university level. It is time to return to using student feedback as an improvement tool. Complacent and relatively meaningless one-size-fits-all surveys used to rank entire institutions are misleading, especially to prospective students, at whom the obsession with league tables is supposedly aimed. Zineldin et al. (2011), for example, showed in their study that the ten critical components of student satisfaction, in order of importance, were: (1) cleanliness of classrooms; (2) cleanliness of toilets; (3) the skill of the professors attending the class; (4) politeness of professors; (5) physical appearance of professors and assistants; (6) responsiveness of the professors to students’ needs and questions; (7) cleanliness of the food court; (8) physical appearance of classrooms; (9) politeness of assistants; (10) the sense of physical security the students felt on the university campus. Few of these criteria are likely to have prominence in national surveys that have not engaged with student views before the questionnaire is constructed. While this list may be ‘idiosyncratic’ to the specific study, it is indicative of the variability of student perspectives and their considerable variance from the bland and generic statements found in national surveys.
{"title":"Back to basics for student satisfaction: improving learning rather than constructing fatuous rankings","authors":"L. Harvey","doi":"10.1080/13538322.2022.2050477","DOIUrl":"https://doi.org/10.1080/13538322.2022.2050477","url":null,"abstract":"There is growing concern expressed in this journal and elsewhere about the misdirection of student feedback processes. ‘Feedback’ in this sense refers to the expressed opinions of students about the service they receive as students. This may include perceptions about the learning and teaching, course organisation, learning support and environment. The problem is that feedback seems increasingly to have become a ritualistic process that results in very little if any action and, is thereby, decried as of little value. Student indifference because of the formulaic nature of the feedback and the failure to see any changes enacted only serves to reinforce the pointlessness of the process. The problem, though, is not the indifference or contempt with the process. That is the symptom. The problem is the lack of desire to use student views to make changes compounded by the obsession with standardisation of questions in fatuous national surveys. Standardising student feedback is the enemy of improvement. It misses the whole point. It facilitates ludicrous and entirely pointless rankings. Student feedback is a serious matter that provides the basis for a fundamental exploration of what works and what doesn’t work for students. It is not about creating league tables or rating teachers. Student feedback is fundamentally about making changes to the student experience at a level that improves the experience for students: teaching and learning at a programme level, general facilities at a university level. It is time to return to using student feedback as an improvement tool. Complacent and relatively meaningless one-size-fits-all surveys used to rank entire institutions are misleading, especially to prospective students, for whose benefit the obsession with league tables is supposedly aimed. Zineldin et al. (2011), for example, showed that in their study that the ten critical components of student satisfaction, in order of importance, were as follows: (1) cleanliness of classrooms (2) cleanliness of toilets (3) the skill of the professors attending the class (4) politeness of professors (5) physical appearance of professors and assistants (6) responsiveness of the professors to students’ needs and questions (7) cleanliness of the food court (8) physical appearance of classrooms (9) politeness of assistants (10) the sense of physical security the students felt on the university campus. Not many of these criteria are likely to have prominence in national surveys that have not engaged with student views before the questionnaire is constructed. 
While this list may be ‘idiosyncratic’ of the specific study, it is indicative of the variability of student perspectives and their considerable variance from the bland and generic statements that are found in national surveys.","PeriodicalId":46354,"journal":{"name":"Quality in Higher Education","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76130268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing university students’ perceptions of teacher care
Pub Date: 2022-03-08 | DOI: 10.1080/13538322.2022.2042894
Anne L. L. Tang, V. Tung, Caroline Walker-Gleaves, J. Rattray
Abstract This study aimed to examine university students’ perceptions of teacher care overall and in the three constructs of pedagogical care, holistic care and relational care, to consider their inclusion in quality enhancement models. Quantitative research using a self-administered online survey was conducted with undergraduates in Hong Kong, Macau, the Republic of Fiji, Taiwan and the United Kingdom. Based on descriptive and paired-sample t-test analyses, the empirical results show that university students perceived teacher care as important. They ascribed the highest importance to relational care, followed by pedagogical care and holistic care. This research advocates recognising the importance of teacher care in university education and integrating it into higher education pedagogy. This article proposes a caring quality mechanism for enhancing teaching quality, to address the inadequacy of the audit-focused quality system.
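For readers who want to see the kind of analysis the abstract names, here is a minimal sketch of a paired-sample t-test comparing the importance students ascribe to two care constructs. The construct names echo the abstract, but the ratings, sample size and scale are hypothetical and purely illustrative; this is not the study’s data or code.

```python
# Minimal paired-sample t-test sketch with hypothetical data: each student rates
# two care constructs, so the observations are matched pairs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 120  # hypothetical sample size
relational_care = rng.normal(loc=5.8, scale=0.8, size=n_students)   # invented 1-7 ratings
pedagogical_care = rng.normal(loc=5.4, scale=0.9, size=n_students)  # invented 1-7 ratings

t_stat, p_value = stats.ttest_rel(relational_care, pedagogical_care)
print(f"mean difference = {np.mean(relational_care - pedagogical_care):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```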
{"title":"Assessing university students’ perceptions of teacher care","authors":"Anne L. L. Tang, V. Tung, Caroline Walker-Gleaves, J. Rattray","doi":"10.1080/13538322.2022.2042894","DOIUrl":"https://doi.org/10.1080/13538322.2022.2042894","url":null,"abstract":"Abstract This study aimed to examine university students’ perceptions of teacher care overall and in the three constructs of pedagogical care, holistic care and relational care, to consider their inclusion in quality enhancement models. Quantitative research using self-administrated online survey was conducted with undergraduates in Hong Kong, Macau, Republic of Fiji, Taiwan and the United Kingdom. Based on the descriptive and paired-sample t-test analyses, empirical results have shown that university students perceived teacher care as important. They ascribed the highest importance to relational care, followed by pedagogical care and holistic care. This research advocates recognising the importance of teacher care in university education and integrating it into higher education pedagogy. This article proposes a caring quality mechanism for enhancing teaching quality, to address the inadequacy of the audit-focused quality system.","PeriodicalId":46354,"journal":{"name":"Quality in Higher Education","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80777762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Critical social research: re-examining quality
Pub Date: 2022-03-07 | DOI: 10.1080/13538322.2022.2037762
L. Harvey
Abstraction is usually construed as the distillation of sensory perception of the world of objects into conceptual categories. That is, starting from the (literally) objective world, recurrent or apparently core or defining features are identified until an abstract concept is formulated. Thus, for example, ‘employability’ is construed as the set of ‘skills’ that enable a person to get a job. This process of distillation of some features from a set of observed objects is at the basis of most systems of classification. Critical social research starts from the view that facts do not exist independently of their theoretical context. If facts are not self-evident then concepts cannot be abstracted from them. Critical social research thus works by moving from the abstract to the concrete. It starts with abstract generalisations and investigates them in a broader context. For example, aggressive behaviour in the home in which a husband pushes, hits or throws things at his wife is encapsulated by the term ‘domestic violence’. Critical social research goes beyond the surface appearance of domestic violence as a set of aggressive acts and reconceptualises it as, for example, an outcome of patriarchal control. Abstraction, for critical social research, is more than specifying the concrete components; it requires identifying underlying structures, which have been assimilated uncritically into the concept, with the aim of developing a reconstructed concept. Abstraction in critical social research, therefore, differs from the positivist use because, rather than simply providing the basis for ordering appearances and ultimately reifying them, abstractions are used to get beneath the surface of appearances.
{"title":"Critical social research: re-examining quality","authors":"L. Harvey","doi":"10.1080/13538322.2022.2037762","DOIUrl":"https://doi.org/10.1080/13538322.2022.2037762","url":null,"abstract":"ion Abstraction is usually construed as the distillation of sensory perception of the world of objects into conceptual categories. That is, starting from the (literally) objective world, recurrent or apparently core or defining features are identified until an abstract concept is formulated. Thus, for example, ‘employability’ is construed as the set of ‘skills’ that enable a person to get a job. This process of distillation of some features from a set of observed objects is at the basis of most systems of classification. Critical social research starts from the view that facts do not exist independently of their theoretical context. If facts are not self-evident then concepts cannot be abstracted from them. Critical social research thus works by moving from the abstract to the concrete. It starts with the abstract generalisation and investigates them in a broader context. For example, aggressive behaviour in the home in which a husband pushes, hits or throws things at his wife is encapsulated by the term ‘domestic violence’. Critical social research goes beyond the surface appearance of domestic violence as a set of aggressive acts and reconceptualises it as, for example, an outcome of a patriarchal control. Abstraction, for critical social research, is more than specifying the concrete components, it requires identifying underlying structures, which have been assimilated uncritically into the concept, with the aim of developing a reconstructed concept. Abstraction in critical social research, therefore, differs from the positivist use because, rather than simply providing the basis for ordering appearances and ultimately reifying them, they are used to get beneath the surface of appearances.ion is usually construed as the distillation of sensory perception of the world of objects into conceptual categories. That is, starting from the (literally) objective world, recurrent or apparently core or defining features are identified until an abstract concept is formulated. Thus, for example, ‘employability’ is construed as the set of ‘skills’ that enable a person to get a job. This process of distillation of some features from a set of observed objects is at the basis of most systems of classification. Critical social research starts from the view that facts do not exist independently of their theoretical context. If facts are not self-evident then concepts cannot be abstracted from them. Critical social research thus works by moving from the abstract to the concrete. It starts with the abstract generalisation and investigates them in a broader context. For example, aggressive behaviour in the home in which a husband pushes, hits or throws things at his wife is encapsulated by the term ‘domestic violence’. Critical social research goes beyond the surface appearance of domestic violence as a set of aggressive acts and reconceptualises it as, for example, an outcome of a patriarchal control. 
Abstraction, for critical social research, is more than specify","PeriodicalId":46354,"journal":{"name":"Quality in Higher Education","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88647302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance-based evaluation and funding model for central universities in India: a preliminary assessment
Pub Date: 2022-02-13 | DOI: 10.1080/13538322.2021.2017396
Vinit Kumar, Y. Akhter, Gopal Ji
Abstract The newly adopted performance-based evaluation and funding model is one of the recent quality initiatives taken by the University Grants Commission in India, which aims to improve quality in the management and administration of federally funded universities in India. The article critically analyses the ranking released in 2020 based on this model, raises some critical observations about the methodology and results of the ranking and discusses the key features of the ranking framework. The findings suggest that universities ranked higher in the National Institutional Ranking Framework and the National Assessment and Accreditation Council ratings have not performed on par with their ranks in this target-based methodology. Only one university out of 40 found a place in the ‘outstanding’ category and one university was categorised as ‘poor’. The article goes on to discuss the current state and future directions of central universities in India aspiring to join the ‘world-class’ league.
{"title":"Performance-based evaluation and funding model for central universities in India: a preliminary assessment","authors":"Vinit Kumar, Y. Akhter, Gopal Ji","doi":"10.1080/13538322.2021.2017396","DOIUrl":"https://doi.org/10.1080/13538322.2021.2017396","url":null,"abstract":"Abstract The newly adopted performance-based evaluation and funding model is one of the recent quality initiatives taken by the University Grants Commission in India, which aims to improve quality in management and administration of federally funded universities in India. The article critically analyses the 2020 released ranking based on this model and raises some critical observations about the methodology and results of the ranking, discussing the key features of the ranking framework. The findings suggest that universities ranked higher in the National Institutional Ranking Framework and National Assessment and Accreditation Council ratings have performed not at par with their ranks in this target-based methodology. Only one university of 40 could find a place in the ‘outstanding’ category and one university was categorised as ‘poor’. The article goes on to discuss the current state and future directions of central universities in India aspiring to join the ‘world-class’ league.","PeriodicalId":46354,"journal":{"name":"Quality in Higher Education","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81679701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Critical social research: a call for articles
Pub Date: 2022-02-13 | DOI: 10.1080/13538322.2022.2037182
{"title":"Critical social research: a call for articles","authors":"","doi":"10.1080/13538322.2022.2037182","DOIUrl":"https://doi.org/10.1080/13538322.2022.2037182","url":null,"abstract":"","PeriodicalId":46354,"journal":{"name":"Quality in Higher Education","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87495558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Training for quality assurance in higher education: practical insights for effective design and successful delivery
Pub Date: 2022-02-13 | DOI: 10.1080/13538322.2021.2020978
Anca Greere
Abstract Derived from practical experience, this article outlines key aspects to be considered when developing a training programme focused on quality assurance in higher education and suggests possible measures of success for training delivery. Practice-based professional reflections are systematised into an applicable, duly tested quality assurance training model that can be transferred and adapted for diverse national and international contexts. The distinction between two training types guides the analysis and allows for differentiated conclusions regarding context, content, delivery and anticipated outcomes. Advanced quality assurance professional development training for appointed quality assurance professionals is compared and contrasted with initial quality assurance capacity building for other stakeholder groups, such as academics, administrators or students. The main aim is to determine underlying principles for training design and delivery that ensure a positive experience, respond to individual training needs in a relevant way and have a lasting professional impact.
{"title":"Training for quality assurance in higher education: practical insights for effective design and successful delivery","authors":"Anca Greere","doi":"10.1080/13538322.2021.2020978","DOIUrl":"https://doi.org/10.1080/13538322.2021.2020978","url":null,"abstract":"Abstract Derived from practical experience, this article outlines key aspects to be considered when developing a training programme focused on quality assurance in higher education and suggests possible measures of success for training delivery. Practice-based professional reflections are systematised into an applicable, duly tested, quality assurance training model that can be transferred and adapted for diverse national and international contexts. The distinction between two training types guides the analysis and allows for differentiated conclusions regarding context, content, delivery and anticipated outcomes. Advanced quality assurance professional development training for appointed quality assurance professionals is compared and contrasted to initial quality assurance capacity building for other stakeholder groups, such as academics, administrators or students. The main aim is to determine underlying principles for training design and delivery to ensure a positive experience that relevantly responds to individual training needs and determines a lasting professional impact.","PeriodicalId":46354,"journal":{"name":"Quality in Higher Education","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90606351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ensuring quality of offshore programmes: views and expectations of key stakeholders in Oman
Pub Date: 2022-02-09 | DOI: 10.1080/13538322.2021.2024273
M. Cheng, Jokha Al Shukaili
Abstract Developing offshore programmes provided by private higher education institutions through affiliation agreements with international university partners is a key strategy for assuring the quality of programmes in the Sultanate of Oman. However, there is limited research on these programmes from the perspectives of Ministry of Higher Education officials, managers, academics and students of private higher education institutions. This study uses gap analysis to explore these key stakeholders’ expectations and perceptions of the quality of offshore programmes in Oman. It reveals that dependence on international university partners to monitor their offshore programmes compromises quality, because local academics have limited involvement in developing programmes and students get limited feedback on their coursework. Students’ lack of English proficiency also means they struggle with offshore programme requirements. Key stakeholders argue for embedding Omani cultural values in the offshore programmes without compromising their academic qualification credentials, and for enhancing students’ learning experience so that they become global citizens.
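As a rough illustration of the gap-analysis approach the abstract mentions, the sketch below computes expectation-minus-perception gap scores per quality item. The items and ratings are hypothetical examples loosely echoing the themes above, not data from the Omani study.

```python
# Minimal gap-analysis sketch with hypothetical data: for each quality item,
# gap = mean expectation - mean perception; larger positive gaps flag areas
# where provision falls furthest short of stakeholder expectations.
from statistics import mean

expectations = {  # hypothetical 1-5 ratings
    "local academic involvement": [5, 5, 4, 5],
    "feedback on coursework": [5, 4, 5, 5],
    "English-language support": [4, 5, 4, 4],
}
perceptions = {  # hypothetical 1-5 ratings
    "local academic involvement": [2, 3, 2, 3],
    "feedback on coursework": [3, 3, 2, 3],
    "English-language support": [3, 4, 3, 3],
}

for item, expected in expectations.items():
    gap = mean(expected) - mean(perceptions[item])
    print(f"{item}: gap = {gap:+.2f}")
```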
{"title":"Ensuring quality of offshore programmes: views and expectations of key stakeholders in Oman","authors":"M. Cheng, Jokha Al Shukaili","doi":"10.1080/13538322.2021.2024273","DOIUrl":"https://doi.org/10.1080/13538322.2021.2024273","url":null,"abstract":"Abstract Developing offshore programmes provided by private higher education institutions through affiliation agreements with international university partners is a key strategy to assure the quality of programmes in the Sultanate of Oman. However, there is limited research on these programmes from the perspectives of Ministry of Higher Education officials, managers, academics and students of private higher education institutions. This study uses gap analysis to explore these key stakeholders’ expectations and perceptions of the quality of offshore programmes in Oman. It reveals that dependence on International University Partners to monitor their offshore programmes compromises quality because local academics have limited involvement in developing programmes and students get limited feedback on their coursework. Students’ lack of English proficiency also makes them struggle with offshore programme requirements. Key stakeholders argue for embedding Omani cultural values in the offshore programmes without compromising their academic qualification credentials and enhancing students’ learning experience to become global citizens.","PeriodicalId":46354,"journal":{"name":"Quality in Higher Education","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87739599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Justifying, contextualising and operationalising performance indicators of learning and teaching: the role of theories and practice of learning and teaching
Pub Date: 2022-01-02 | DOI: 10.1080/13538322.2021.1951454
T. Leiber
ABSTRACT The paper characterises the learning and teaching process with a focus on higher education and describes the basics of contemporary theories of learning and teaching. Against this conceptual background, the interweaving of performance indicators with the theories and practice of learning and teaching is analysed. For a small, selected number of exemplary performance indicators, it is shown that they are justified by theories and practice of learning and teaching. The justification link between indicators and theories of learning and teaching is usually not a straightforward relation but a complicated material inference that is multifactorial on both sides, the premises and the conclusions of the inference. The analysis contributes to improving the understanding of the epistemological justification of performance indicators of learning and teaching.
{"title":"Justifying, contextualising and operationalising performance indicators of learning and teaching: the role of theories and practice of learning and teaching","authors":"T. Leiber","doi":"10.1080/13538322.2021.1951454","DOIUrl":"https://doi.org/10.1080/13538322.2021.1951454","url":null,"abstract":"ABSTRACT The paper characterises the learning and teaching process with a focus on higher education and describes the basics of contemporary theories of learning and teaching. Against this conceptual background, the interweaving of performance indicators and theories and practice of learning and teaching is analysed. For a small, selected number of exemplary performance indicators, it is shown that they are justified by theories and practice of learning and teaching. The justification link between indicators and theories of learning and teaching is usually not a straightforward relation but a complicated material inference that is multifactorial on both sides, the premises and the conclusions of the inference. The analysis contributes to improve the understanding of the epistemological justification of performance indicators of learning and teaching.","PeriodicalId":46354,"journal":{"name":"Quality in Higher Education","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78711886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quality management, performance measurement and indicators in higher education institutions: between burden, inspiration and innovation
Pub Date: 2022-01-02 | DOI: 10.1080/13538322.2021.1951445
C. S. Sarrico
ABSTRACT Performance indicators are increasingly used to measure the performance and quality of higher education. The purpose of this article is to discuss their role in reflecting on the challenges faced by high-participation higher education systems regarding the quality of student outcomes, equity of access, societal relevance and financial sustainability. Based on a review of existing internationally comparable metrics and of policy and scholarly literature on higher education performance, the article discusses the strengths and weaknesses of current performance indicators and the perennial tension between the burden of accountability and the inspiration and innovation that may result from the developmental use of performance indicators for improvement. It concludes by summarising some observable results of performance and quality management and reflecting on some possible future trajectories.
{"title":"Quality management, performance measurement and indicators in higher education institutions: between burden, inspiration and innovation","authors":"C. S. Sarrico","doi":"10.1080/13538322.2021.1951445","DOIUrl":"https://doi.org/10.1080/13538322.2021.1951445","url":null,"abstract":"ABSTRACT Performance indicators are increasingly used to measure the performance and quality of higher education. The purpose of this article is to discuss their role for reflecting on the challenges faced by high-participation higher education systems, regarding quality of student outcomes, equity of access, societal relevance and financial sustainability. Based on a review of existing international comparable metrics, policy and scholarly literature on higher education performance, the article discusses the strengths and weaknesses of current performance indicators and the perennial tension between the burden of accountability and the inspiration and innovation that may result from the developmental use of performance indicators for improvement. It concludes by summarising some observable results of performance and quality management and reflecting on some possible future trajectories.","PeriodicalId":46354,"journal":{"name":"Quality in Higher Education","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89520269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning analytics and data ethics in performance data management: a benchlearning exercise involving six European universities
Pub Date: 2022-01-02 | DOI: 10.1080/13538322.2021.1951455
M. J. Rosa, James Williams, Joke Claeys, David Kane, S. Bruckmann, Daniela Costa, J. A. Rafael
ABSTRACT Drawing on the SQELT Erasmus+ project, this article explores how learning analytics is implemented at a set of six European universities in the context of their performance data management models, including its multiple functions and ethical issues. It further identifies possible good practice and policy recommendations at decision-making level. Results show that learning analytics is present to some extent in all six institutions, although it is mostly based on traditional data and still lacks predictive capacity concerning students’ performance. Learning analytics is viewed as useful in providing more accurate personal data on students’ learning, contributing to the establishment of more sophisticated quality management systems. The European General Data Protection Regulation and national privacy laws sufficiently cover the majority of data ethics risks posed by learning analytics. Overall, learning analytics entails both opportunities and threats. The possibilities of a learning analytics approach deserve further attention within universities and quality assurance agencies.
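To make concrete what ‘predictive capacity concerning students’ performance’ could look like in practice, here is a minimal sketch of a learning-analytics style model. The engagement features, synthetic data and choice of logistic regression are assumptions for illustration only; they are not part of the SQELT project or the six universities’ systems.

```python
# Minimal predictive learning-analytics sketch with synthetic data: estimate the
# probability that a student passes from simple engagement features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 200  # hypothetical cohort size
logins_per_week = rng.poisson(4, n)
assignments_submitted = rng.integers(0, 11, n)
# Synthetic ground truth: engagement loosely drives the pass/fail outcome.
passed = (0.3 * logins_per_week + 0.5 * assignments_submitted
          + rng.normal(0, 1.5, n)) > 4

X = np.column_stack([logins_per_week, assignments_submitted])
model = LogisticRegression().fit(X, passed)

# Estimated pass probability for a new (hypothetical) student profile.
print(model.predict_proba([[3, 6]])[0, 1])
```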
{"title":"Learning analytics and data ethics in performance data management: a benchlearning exercise involving six European universities","authors":"M. J. Rosa, James Williams, Joke Claeys, David Kane, S. Bruckmann, Daniela Costa, J. A. Rafael","doi":"10.1080/13538322.2021.1951455","DOIUrl":"https://doi.org/10.1080/13538322.2021.1951455","url":null,"abstract":"ABSTRACT Drawn from the SQELT Erasmus+ project, this article explores how learning analytics is implemented at a set of six European universities in the context of their performance data management models, including its multiple functions and ethical issues. It further identifies possible good practice and policy recommendations at decision-making level. Results show that learning analytics is present to a certain extent in all six institutions, although mostly based on traditional data and still lacking predictive capacity concerning students’ performance. Learning analytics is viewed as useful in providing more accurate personal data on students’ learning, contributing to the establishment of more sophisticated quality management systems. The European General Data Protection Regulation and national privacy laws sufficiently cover the majority of data ethics risks posed by learning analytics. Overall, learning analytics entails both opportunities and threats. The possibilities of a learning analytics approach deserve further attention within universities and quality assurance agencies.","PeriodicalId":46354,"journal":{"name":"Quality in Higher Education","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78250366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}