Missing the forest for the trees: investigating factors influencing student evaluations of teaching
Richard O’Donovan
Assessment & Evaluation in Higher Education. Pub Date: 2023-10-13; DOI: 10.1080/02602938.2023.2266862
Student evaluations of teaching (SETs) feature prominently in higher education and can impact an academic’s career. As a result, they have attracted considerable research attention aimed at identifying evidence of bias and the influence of factors beyond an educator’s control. This study investigates the influence of seven factors on a large dataset of student evaluations (N = 376,805) of academics teaching at an Australian university. Students were invited to rate their experience at the end of each teaching period using an online survey instrument. The following factors are analysed by comparing means between relevant groups to test whether: i) SETs are dominated by students with strong feelings; ii) angry students give revenge reviews; iii) larger units are rated lower than smaller units; iv) students of different genders and backgrounds hold different expectations and give different ratings; v) the reticence of international students lowers overall ratings; vi) bigoted students skew results for some staff; and vii) SET surveys conducted during examination periods disadvantage academics teaching units with examinations. Overall, while statistically significant differences were found, they represented only small or trivial effects, with medium effects in only two limited cases. The results highlight the importance of explicitly reporting the effect size of statistically significant results, and the benefits of representing differences visually in ways that avoid over-emphasising them.
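The abstract’s closing point, that very large samples can make trivially small differences statistically significant, can be sketched numerically. This is an illustrative simulation, not the paper’s actual analysis; the group means, spread and sample sizes below are invented for demonstration.

```python
import math
import random
from statistics import mean, stdev

random.seed(42)

# Two large groups of ratings on a 5-point scale whose true means differ
# by only 0.05 -- a practically trivial gap (values are illustrative).
group_a = [random.gauss(4.00, 0.8) for _ in range(100_000)]
group_b = [random.gauss(4.05, 0.8) for _ in range(100_000)]

def cohens_d(x, y):
    """Standardised mean difference (pooled SD): the effect size."""
    nx, ny = len(x), len(y)
    pooled_sd = math.sqrt(((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2)
                          / (nx + ny - 2))
    return (mean(y) - mean(x)) / pooled_sd

def welch_t(x, y):
    """Welch's t statistic: grows with sample size even for tiny differences."""
    se = math.sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    return (mean(y) - mean(x)) / se

d = cohens_d(group_a, group_b)   # stays below the 0.2 "small effect" threshold
t = welch_t(group_a, group_b)    # far beyond any conventional significance cutoff
```

With 100,000 ratings per group the t statistic is enormous while Cohen’s d stays well under 0.2, which is exactly why the paper argues for reporting effect sizes alongside significance tests.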
Measurement of higher education students’ and teachers’ experiences in learning management systems: a scoping review
Patricia Simon, Juming Jiang, Luke K. Fryer
Pub Date: 2023-10-08; DOI: 10.1080/02602938.2023.2266154
Learning management systems (LMSs) have facilitated access to courses beyond conventional classroom environments via distance and asynchronous education. Although numerous studies have examined LMS usage in higher education institutions, reviews of scales measuring the LMS experience of both students and teachers remain scarce. This scoping review aimed to identify scales assessing student and teacher experiences with LMSs, along with the attributes of the studies employing these scales. The systematic search encompassed five databases, ultimately incorporating 79 of 5,536 peer-reviewed articles in the final review. Findings revealed that the included studies predominantly focused on student samples, with fewer examining teacher samples and fewer still involving both stakeholders. Most included studies created their own measures, and over half of these newly created measures combined constructs drawn from multiple theories. The System Usability Scale was the only measure used across multiple studies. The Technology Acceptance Model (TAM) and DeLone and McLean’s Information Systems Success (IS) model emerged as the most frequently employed frameworks for investigating factors influencing LMS utilization. Moodle was the most commonly assessed LMS within the reviewed studies. Based on these findings, recommendations for future LMS research are discussed.
Keywords: distance education; online learning; post-secondary education; evaluation methodologies
Acknowledgement: The authors would like to thank Miss Yijin Li and Miss Ying Su for their help and support. This work could not have been accomplished without their tireless hard work.
Disclosure statement: There is no conflict of interest to declare.
Notes on contributors: Patricia D. Simon is a postdoctoral fellow at the University of Hong Kong. Her research interests include the promotion of students’ mental health, well-being and engagement in both physical and virtual classrooms. She is also interested in applying psychological principles to improve educational technologies and to promote environmental sustainability, health and well-being. Juming Jiang is a postdoctoral fellow at the University of Hong Kong. His research programme focuses on how to support students’ learning motivation and interest in offline and online learning environments, with extended reality (i.e. virtual/augmented/mixed reality) and artificial intelligence technologies. Luke K. Fryer is an Associate Professor at the University of Hong Kong. His research programme addresses motivations to learn, learning strategies and teaching-learning on/off-line.
Shifting feedback agency to students by having them write their own feedback comments
David Nicol, Lovleen Kushwah
Pub Date: 2023-10-08; DOI: 10.1080/02602938.2023.2265080
In higher education, there is a tension between teachers providing comments to students about their work and students developing agency in producing that work. Most proposals to address this tension assume a dialogic conception of feedback in which students take more agency in eliciting and responding to others’ advice, recently framed as developing their feedback literacy. This conception does not, however, acknowledge the feedback agency students exercise implicitly during learning through interactions with resources (e.g. textbooks, videos). This study therefore adopted a different framing: that all feedback is internally generated by students through comparing their work against different sources of reference information, human and material, and that agency is increased when these comparisons are made explicit. Students produced a literature review, compared it against information in two published reviews, and wrote their own self-feedback comments. The small sample size enabled detailed analysis of these comments and of students’ experiences in producing them. Results show that students can generate significant self-feedback by making resource comparisons, and that this feedback can replace or complement teacher feedback, be activated when required, and help students fine-tune feedback requests to teachers. This widely applicable methodology strengthens students’ natural capacity for agency and makes dialogic feedback more effective.
Collaboration, collusion, and barter-cheating: an analysis of academic help-seeking behaviors
Alexander Amigud, Samira Hosseini
Pub Date: 2023-10-04; DOI: 10.1080/02602938.2023.2259631
This study explores the social nature of learning and discusses its implications for student assessment. To this end, we analyzed a sample of unique first-hand accounts of students seeking help with academic work, relying on the grounded theory approach to identify incentives for academic support (n = 807), and used time-series analysis (n = 5,637) to identify temporal trends. Our findings demonstrate an overlap between collaboration, collusion, and contract cheating practices and highlight a trade element in peer relationships. In contrast to outsourcing academic work to commercial providers, whereby academic support is exchanged for money, students tend to trade what they have available. The incentives offered in exchange for academic support included food, personal attention, money, alcohol, personal items, and sexual opportunities. The top subjects students sought help with were mathematics, history, and English. When examined on a timeline (2018–2023), the help-seeking behaviors persisted throughout the pandemic-related lockdowns; however, there was a shift toward monetary transactions. We argue that the peer community can be considered an economy. Transacting with peers is more accessible, more affordable, and less risky than transacting with commercial providers. Furthermore, when students are partially involved in the production of academic work, misconduct becomes harder to detect.
Keywords: student assessment; peer support; contract cheating; collusion; help-seeking; social networks
Disclosure statement: No potential conflict of interest was reported by the author(s).
Flexible assessment: some benefits and costs for students and instructors
Mairi Cowan
Pub Date: 2023-10-02; DOI: 10.1080/02602938.2023.2263668
Research on flexible assessment suggests that providing students with choice in assignments can increase motivation and deepen investment in learning. Although instructors are often advised to adopt flexible assessment, they are also warned about potential detriments such as a perceived lack of rigour among colleagues, the stress that decision-making can bring to students, and increased workload for themselves. This paper draws upon student responses to a survey, a class discussion, and instructor observations to identify benefits and costs of flexible assessment in a fourth-year history course. Among the benefits are that students can pursue their interests more freely in both content and form, while the instructor can enjoy creative and original student work. The costs include anxiety among students who may be unsure how best to choose their assessments, and additional work for the instructor, who must manage a multiplicity of assignments within the confines of an institutional grading system. The implementation of flexible assessment is recommended provided that the flexibility is compatible with the course’s learning outcomes, the students’ level of independence, and the instructor’s capacity to take on an unpredictable amount of extra work. Suggestions are offered for how to implement flexible assessment without creating too much of a burden for either students or instructors.
Keywords: flexible assessment; choice; motivation; workload; history
Acknowledgments: The author would like to thank colleagues and students at the University of Toronto Mississauga. In particular, the author is grateful for the encouragement and guidance of professors Sanja Hinić-Frlog, Nicole Laliberté, and Fiona Rawle, who helped develop this version of flexible assessment, and to the students in HIS409, who remained open and generous in sharing their thoughts throughout the experiment.
Disclosure statement: No potential conflict of interest was reported by the author.
Are assessment accommodations cheating? A critical policy analysis
Juuso Henrik Nieminen, Sarah Elaine Eaton
Pub Date: 2023-09-21; DOI: 10.1080/02602938.2023.2259632
Assessment accommodations are used globally in higher education systems to ensure that students with disabilities can participate fairly in assessment. Even though assessment accommodations are meant to promote access, not success, they are commonly portrayed as potential cheating on the grounds that they provide certain students with unfair advantages. This may lead students to avoid applying for accommodations for fear of being labelled ‘cheaters’. Various security practices are often implemented within assessment accommodation processes to detect and prevent cheating and malingering. However, there remains a lack of theoretical understanding of the discursive interconnections between assessment accommodations and assessment security. In this study, we conduct a critical policy analysis to unpack how Canadian assessment accommodation policies have problematised assessment accommodations as a potential site for cheating. We show that Canadian universities use considerable resources to prevent cheating as accommodations are administered. In doing so, they portray students with disabilities as potential cheaters. We situate these policies in the wider societal context of the ‘fear of the disability con’, which perpetuates discrimination towards people with disabilities. We argue that assessment accommodation policies belong to the realm of assessment security rather than integrity and may thus fail to promote equity and inclusion.
Ten years of editing Assessment and Evaluation in Higher Education
Malcolm Tight
Pub Date: 2023-04-03; DOI: 10.1080/02602938.2023.2181601. Assessment & Evaluation in Higher Education, 48(1), 259–261.
Quantifying halo effects in students’ evaluation of teaching: a response to Michela
Edmund Cannon, Giam Pietro Cipriani
Pub Date: 2023-02-21; DOI: 10.1080/02602938.2023.2180484
In Cannon and Cipriani (2022) we contributed to the literature on halo effects in student evaluations of teaching (SETs) by proposing and implementing a method to separate halo effects in student responses from an external measure of the item being assessed. Our paper has been criticised by Michela (2022). Many of his comments about problems with SETs are not directly relevant, as they discuss issues other than halo. We revisit our data and confirm our conclusion that halo does not necessarily make SETs uninformative. However, we do find heterogeneity in the importance of halo between SETs from two different campuses.
Consensus moderation: the voices of expert academics
Jaci Mason, L. Roberts
Pub Date: 2023-01-02; DOI: 10.1080/02602938.2022.2161999. Pages 926–937.
Consensus moderation, where collaboration and discussion take place to reach agreement on mark allocation, is a frequently used approach to quality assurance in higher education. This study explored expert academics’ perceptions of consensus moderation through 12 semi-structured, open-ended interviews. Data were analysed using thematic analysis, resulting in six themes: accept that marking is subjective; consensus moderation is a learning process; use calibration to develop and maintain standards; moderation is core academic work; resources are needed to enable consensus moderation; and different moderation practices are needed for different moderation purposes. Consensus moderation is a complex activity with many challenges, and the findings from this study contribute to our current understanding of it. The findings have implications for policy and practice, and identify ways in which consensus moderation practice can be enhanced.
Pub Date : 2023-01-02 DOI: 10.1080/02602938.2022.2052799
M. Ferrão
Abstract Degree completion on theoretical time is a phenomenon seldom explored in the higher education literature. We applied variance components models and random coefficients models to the microdata of an entire entrant cohort of first-time, full-time undergraduate students who completed their three-year programme at a Portuguese institution during the theoretical period. The study showed that the variance partition coefficient is 0.27, considering the hierarchical structure of students nested in programmes. The differential effect of students’ university entrance scores on degree completion grade point average is stronger across programmes than across faculties, controlling for students’ sociodemographic background (gender, age and parents’ level of education), social scholarship granted, and preference regarding the institution and programme attended. The fixed effects related to the areas of study and type of institution (e.g. university or polytechnic) were also quantified. The estimates indicated that secondary school preparation is the most important predictive factor for the final grade point average of degree completion among the variables at enrolment. Moreover, differences based on gender, age, and areas of study were found.
{"title":"Differential effect of university entrance scores on graduates’ performance: the case of degree completion on time in Portugal","authors":"M. Ferrão","doi":"10.1080/02602938.2022.2052799","DOIUrl":"https://doi.org/10.1080/02602938.2022.2052799","url":null,"abstract":"Abstract Degree completion on theoretical time is a phenomenon seldom explored in the higher education literature. We applied variance components models and random coefficients models to the microdata of an entire entrant cohort of first-time, full-time undergraduate students who completed their three-year programme at a Portuguese institution during the theoretical period. The study showed that the variance partition coefficient is 0.27, considering the hierarchical structure of students nested in programmes. The differential effect of students’ university entrance scores on degree completion grade point average is stronger across programmes than across faculties, controlling for students’ sociodemographic background (gender, age and parents’ level of education), social scholarship granted, and preference regarding the institution and programme attended. The fixed effects related to the areas of study and type of institution (e.g. university or polytechnic) were also quantified. The estimates indicated that secondary school preparation is the most important predictive factor for the final grade point average of degree completion among the variables at enrolment. Moreover, differences based on gender, age, and areas of study were found.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"95 - 106"},"PeriodicalIF":4.4,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48628024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
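The variance partition coefficient of 0.27 reported in the Ferrão abstract has a standard definition for a two-level variance components model: the between-group variance as a share of total variance. A minimal sketch of that calculation (not code from the article; the variance values are illustrative, chosen only so the result matches the reported 0.27):

```python
def variance_partition_coefficient(sigma2_between: float, sigma2_within: float) -> float:
    """Share of total outcome variance attributable to the grouping level
    (here, the programmes in which students are nested):
    VPC = sigma2_between / (sigma2_between + sigma2_within)."""
    return sigma2_between / (sigma2_between + sigma2_within)

# Illustrative variance components; a VPC of 0.27 means 27% of the variance
# in degree-completion GPA lies between programmes rather than between students.
vpc = variance_partition_coefficient(0.27, 0.73)
print(round(vpc, 2))  # 0.27
```

In practice the two variance components would be estimated from a fitted multilevel model rather than supplied directly; the ratio itself is the same either way.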