Pub Date: 2022-08-18. DOI: 10.1080/02602938.2022.2112653
David Corradi
Abstract Juries are a high-stakes practice in higher education for assessing complex competencies. Although they are common, research lags behind in detailing the psychometric qualities of juries, especially when rubrics or rating scales are used as the assessment tool. In this study, I analyze a case of a jury assessment (N = 191) of product development in which both internal teaching staff and external judges assess and fill in an analytic rating scale. Using a polytomous item response theory (IRT) analysis developed for heterogeneous juries (i.e. jury response theory, or JRT), this study attempts to provide insight into the validity and reliability of the assessment tool used. The results indicate that JRT helps detect unreliable response patterns that point to an excellence bias, i.e. a tendency not to score in the highest response category. This article concludes with a discussion of how to counter such bias when using rating scales or rubrics for summative assessment.
{"title":"Excellence bias related to rating scales with summative jury assessment","authors":"David Corradi","doi":"10.1080/02602938.2022.2112653","DOIUrl":"https://doi.org/10.1080/02602938.2022.2112653","url":null,"abstract":"Abstract Juries are a high-stake practice in higher education to assess complex competencies. However common, research remains behind in detailing the psychometric qualities of juries, especially when using rubrics or rating scales as an assessment tool. In this study, I analyze a case of a jury assessment (N = 191) of product development where both internal teaching staff and external judges assess and fill in an analytic rating scale. Using polytomous item response theory (IRT) analysis developed for the analysis of heterogeneous juries (i.e. jury response theory or JRT), this study attempts to provide insight into the validity and reliability of the used assessment tool. The results indicate that JRT helps detect unreliable response patterns that indicate an excellence bias, i.e. a tendency not to score in the highest response category. This article concludes with a discussion on how to counter such bias when using rating scales or rubrics for summative assessment.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"627 - 641"},"PeriodicalIF":4.4,"publicationDate":"2022-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46150844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-16. DOI: 10.1080/02602938.2022.2100875
L. Baartman, Hanneke Baukema, F. Prins
Abstract In response to dissatisfaction with testing cultures in higher education, programmatic assessment has been introduced as an alternative approach. Programmatic assessment involves the longitudinal collection of data points about student learning, aimed at continuous monitoring and feedback. High-stakes decisions are based on a multitude of data points, involving aggregation, saturation and group-decision making. Evidence about the value of programmatic assessment is emerging in health sciences education. However, research also shows that students find it difficult to take an active role in the assessment process and seek feedback. Lower performing students are underrepresented in research on programmatic assessment, which until now mainly focuses on health sciences education. This study therefore explored low and high performing students’ experiences with learning and decision-making in programmatic assessment in relation to their feedback-seeking behaviour in a Communication Sciences program. In total, 55 students filled out a questionnaire about their perceptions of programmatic assessment, their feedback-seeking behaviour and learning performance. Low-performing and high-performing students were selected and interviewed. Several designable elements of programmatic assessment were distinguished that promote or hinder students’ feedback-seeking behaviour, learning and uptake of feedback.
{"title":"Exploring students’ feedback seeking behavior in the context of programmatic assessment","authors":"L. Baartman, Hanneke Baukema, F. Prins","doi":"10.1080/02602938.2022.2100875","DOIUrl":"https://doi.org/10.1080/02602938.2022.2100875","url":null,"abstract":"Abstract In response to dissatisfaction with testing cultures in higher education, programmatic assessment has been introduced as an alternative approach. Programmatic assessment involves the longitudinal collection of data points about student learning, aimed at continuous monitoring and feedback. High-stakes decisions are based on a multitude of data points, involving aggregation, saturation and group-decision making. Evidence about the value of programmatic assessment is emerging in health sciences education. However, research also shows that students find it difficult to take an active role in the assessment process and seek feedback. Lower performing students are underrepresented in research on programmatic assessment, which until now mainly focuses on health sciences education. This study therefore explored low and high performing students’ experiences with learning and decision-making in programmatic assessment in relation to their feedback-seeking behaviour in a Communication Sciences program. In total, 55 students filled out a questionnaire about their perceptions of programmatic assessment, their feedback-seeking behaviour and learning performance. Low-performing and high-performing students were selected and interviewed. Several designable elements of programmatic assessment were distinguished that promote or hinder students’ feedback-seeking behaviour, learning and uptake of feedback.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"598 - 612"},"PeriodicalIF":4.4,"publicationDate":"2022-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49654300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-10. DOI: 10.1080/02602938.2022.2107167
C. Culver
Abstract When students engage in peer assessment activities, they often put emphasis on the feedback they receive from peers but fail to appreciate how their role as a peer assessor can contribute to their learning process and improve their own work. Because of this, students and sometimes teachers undervalue the peer assessment process. This scholarship of teaching and learning project conducts a small-scale controlled experiment with students undertaking peer assessment in randomly assigned groups that either focus on giving and receiving peer feedback or assessing peers’ work only without receiving feedback on their own. In addition, it explores how different peer assessment strategies such as rubric creation, rank order assessment and assessment without qualitative feedback affect both students’ ability to improve their work and their perception of the value of peer assessment. Consistent with theoretical expectations, the results provide exploratory evidence that students’ perceived value of peer assessment is lower when they do not receive feedback, but improvement in their writing is actually higher when they focus on assessing peers’ work rather than receiving feedback on their own. While feedback is a potential benefit of the peer assessment process, it may also distract focus from the potentially more valuable learning that derives from students’ self-evaluating their own work after critically assessing their peers’.
{"title":"Learning as a peer assessor: evaluating peer-assessment strategies","authors":"C. Culver","doi":"10.1080/02602938.2022.2107167","DOIUrl":"https://doi.org/10.1080/02602938.2022.2107167","url":null,"abstract":"Abstract When students engage in peer assessment activities, they often put emphasis on the feedback they receive from peers but fail to appreciate how their role as a peer assessor can contribute to their learning process and improve their own work. Because of this, students and sometimes teachers undervalue the peer assessment process. This scholarship of teaching and learning project conducts a small-scale controlled experiment with students undertaking peer assessment in randomly assigned groups that either focus on giving and receiving peer feedback or assessing peers’ work only without receiving feedback on their own. In addition, it explores how different peer assessment strategies such as rubric creation, rank order assessment and assessment without qualitative feedback affect both students’ ability to improve their work and their perception of the value of peer assessment. Consistent with theoretical expectations, the results provide exploratory evidence that students’ perceived value of peer assessment is lower when they do not receive feedback, but improvement in their writing is actually higher when they focus on assessing peers’ work rather than receiving feedback on their own. While feedback is a potential benefit of the peer assessment process, it may also distract focus from the potentially more valuable learning that derives from students’ self-evaluating their own work after critically assessing their peers’.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"581 - 597"},"PeriodicalIF":4.4,"publicationDate":"2022-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44434756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-09. DOI: 10.1080/02602938.2022.2107999
Richard Harris, P. Blundell-Birtill, Madeleine Pownall
Abstract National student surveys reveal that feedback is an aspect of the education experience with which students are less satisfied. One method to increase student engagement with their written feedback and to improve feedback literacy is promotion of critical self-reflection on the feedback content. We describe two interventions aimed at improving students’ reflective practices with their feedback. In a School of Psychology at a UK research-intensive university, we designed, implemented and evaluated two interventions to improve students’ reflection on, and engagement with, their feedback. The first intervention was a feedback seminar, which comprised a modified version of the Developing Engagement with Feedback Toolkit, adapted for our context and online delivery. The second intervention was an interactive assessment coversheet that was designed to promote self-reflection and dialogical feedback practices between student and marker. We provide a summary of the development of these interventions and share evaluations of both components. Overall, our evaluation demonstrated that these interventions can be a useful opportunity for students to engage with their feedback practices and develop feedback literacy. However, variability in student experiences and inconsistencies across markers, despite these interventions, were barriers to success. We contextualise this with our own reflections and end with recommendations for educators.
{"title":"Development and evaluation of two interventions to improve students’ reflection on feedback","authors":"Richard Harris, P. Blundell-Birtill, Madeleine Pownall","doi":"10.1080/02602938.2022.2107999","DOIUrl":"https://doi.org/10.1080/02602938.2022.2107999","url":null,"abstract":"Abstract National student surveys reveal that feedback is an aspect of the education experience with which students are less satisfied. One method to increase student engagement with their written feedback and to improve feedback literacy is promotion of critical self-reflection on the feedback content. We describe two interventions aimed at improving students’ reflective practices with their feedback. In a School of Psychology at a UK research-intensive university, we designed, implemented and evaluated two interventions to improve students’ reflection on, and engagement with, their feedback. The first intervention was a feedback seminar, which comprised a modified version of the Developing Engagement with Feedback Toolkit, adapted for our context and online delivery. The second intervention was an interactive assessment coversheet that was designed to promote self-reflection and dialogical feedback practices between student and marker. We provide a summary of the development of these interventions and share evaluations of both components. Overall, our evaluation demonstrated that these interventions can be a useful opportunity for students to engage with their feedback practices and develop feedback literacy. However, variability in student experiences and inconsistencies across markers, despite these interventions, were barriers to success. We contextualise this with our own reflections and end with recommendations for educators.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"672 - 685"},"PeriodicalIF":4.4,"publicationDate":"2022-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43443189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-03. DOI: 10.1080/02602938.2022.2099811
Javier Fernández Ruiz, E. Panadero
Abstract The design and implementation of assessment is one of the main challenges for university teachers, who report needing more and better professional development courses in this area. Our study aimed to analyse, at a nationwide level, how public universities (N = 50) design and implement their assessment professional development courses and programmes. Every professional development course offered by Spanish public universities (N = 1627) was screened, and data from all available assessment courses (N = 82) were collected and analysed. These courses were compared in terms of total length, evaluation of the course learning results, and content knowledge covered. Regarding total length, most universities offer few such courses, and those they offer are short; however, there are exceptions that implement longer and more intensive courses. Regarding the evaluation of course results, many universities do not evaluate teachers’ learning, and those that do tend to use passive methods such as attendance. Regarding content knowledge covered, online assessment is the most frequent topic, but important areas such as self- and peer assessment or feedback are vastly underrepresented. Our conclusion is that there is considerable room for improvement in assessment professional development courses (ADPC), and we propose some recommendations aligned with the existing literature.
{"title":"Assessment professional development courses for university teachers: a nationwide analysis exploring length, evaluation and content","authors":"Javier Fernández Ruiz, E. Panadero","doi":"10.1080/02602938.2022.2099811","DOIUrl":"https://doi.org/10.1080/02602938.2022.2099811","url":null,"abstract":"Abstract The design and implementation of assessment is one of the main challenges for university teachers, who claimed needing more and better professional development courses in such area. Our study aimed to analyse at a nationwide level how public universities (N = 50) design and implement their assessment professional development courses and programmes. Every professional development course from Spanish public universities (N = 1627) was screened, and data from all available assessment courses (N = 82) was collected and analysed. These courses were compared in terms of total length, evaluation of the course learning results, and content knowledge covered. Regarding total length, most universities have little offer of courses, both in terms of quantity and duration. However, there are exceptions that implement longer and more intensive courses. Regarding the evaluation of the courses results, many universities do not evaluate teachers’ learning and the ones which do it tend to use passive methods such as attendance. Regarding content knowledge covered, online assessment is the most frequent topic, but important areas such as self- and peer assessment or feedback are vastly underrepresented. Our conclusion is that there is a large room for improvement in ADPC and we propose some recommendations aligned with existent literature.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"485 - 501"},"PeriodicalIF":4.4,"publicationDate":"2022-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43739083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-03. DOI: 10.1080/02602938.2022.2107168
Edd Pitt, N. Winstone
There has been a clear shift in the representation of feedback in the scholarly literature. Whereas feedback was once framed as the information provided by teachers to their students on their work, recent years have witnessed greater recognition of the agentic role of students in feedback processes, in terms of their responsibilities to process and enact feedback to inform their learning (e.g. Boud and Molloy 2013; Winstone, Pitt, and Nash 2021). Whilst there is a growing appreciation that the true impact of feedback comes not from what teachers do but from what students do, this does not mean that the role of teachers is redundant. Feedback design is an important activity for teachers, creating environments in which learners can take on greater responsibility in feedback processes. Alongside the increasing emphasis on the role of students in feedback processes has been the development of a body of research exploring the skills and capacities of students that facilitate such involvement. Such skills and capacities are most commonly discussed as part of frameworks for ‘student feedback literacy’ (Sutton 2012; Carless and Boud 2018; Molloy, Boud, and Henderson 2020). The publication of these frameworks has instigated an explosion of conceptual and empirical work on the topic of feedback literacy, including ecological and sociomaterial perspectives (e.g. Chong 2021; Gravett 2022), the development of tools for its measurement (e.g. Zhan 2021; Song 2022; Yu, Di Zhang, and Liu 2022), and pedagogic approaches to the development of students’ feedback literacy (e.g. Winstone, Mathlin, and Nash 2019; Ketonen, Nieminen, and Hähkiöniemi 2020; Malecka, Boud, and Carless 2020; Fernández-Toro and Duensing 2021; Hoo, Deneen, and Boud 2022; Man, Kong, and Chau 2022; Winstone, Balloo, et al. 2022). Approaches to the development of student feedback literacy recognise the important role of teachers in enabling students to develop their own understandings of feedback processes. In this way, then, teachers also hold skills and capacities related to their practice in feedback processes. Carless and Winstone (2020) built upon the Carless and Boud (2018) framework for student feedback literacy to propose a conceptual framework for teacher feedback literacy. They defined teacher feedback literacy as ‘knowledge, expertise and dispositions to design feedback processes in ways which enable student uptake of feedback and seed the development of student feedback literacy’ (Carless and Winstone 2020, p. 4). They outlined three dimensions of teacher feedback literacy: design (planning curricula and assessment tasks such that students come to appreciate the purpose of feedback, build the capacity for evaluative judgement, and take responsibility for implementing feedback information), relational (showing emotional sensitivity and empathy in feedback processes, and building trust with students) and pragmatic (managing the tensions created by competing functions of feedback, making decisions about workload so that time is invested in feedback that is likely to have impact, and managing constraints while drawing on the support of institutional procedures). The articles in this special issue take markedly different approaches to the concept of teacher feedback literacy; they emphasise the importance of recognising its complexity and possibilities.
{"title":"Enabling and valuing feedback literacies","authors":"Edd Pitt, N. Winstone","doi":"10.1080/02602938.2022.2107168","DOIUrl":"https://doi.org/10.1080/02602938.2022.2107168","url":null,"abstract":"There has been a clear shift in the representation of feedback in the scholarly literature. Whereas feedback was once framed as the information provided by teachers to their students on their work, recent years have witnessed greater recognition of the agentic role of students in feedback processes, in terms of their responsibilities to process and enact feedback to inform their learning (e.g. Boud and Molloy 2013; Winstone, Pitt, and Nash 2021). Whilst there is a growing appreciation that the true impact of feedback comes not from what teachers do but from what students do, this does not mean that the role of teachers is redundant. Feedback design is an important activity for teachers, thus creating environments in which learners can take on greater responsibility in feedback processes. Alongside increasing emphasis on the role of students in feedback processes has been the development of a body of research exploring the skills and capacities of students that facilitate such involvement. Such skills and capacities are most commonly discussed as part of frameworks for ‘student feedback literacy’ (Sutton 2012; Carless and Boud 2018; Molloy, Boud, and Henderson 2020). The publication of these frameworks has instigated an explosion of conceptual and empirical work on the topic of feedback literacy, including ecological and sociomaterial perspectives (e.g. Chong 2021; Gravett 2022), the development of tools for its measurement (e.g. Zhan 2021; Song 2022; Yu, Di Zhang, and Liu 2022), and pedagogic approaches to the development of students’ feedback literacy (e.g. Winstone, Mathlin, and Nash 2019; Ketonen, Nieminen, and Hähkiöniemi 2020; Malecka, Boud, and Carless 2020; Fernández-Toro and Duensing 2021; Hoo, Deneen, and Boud 2022; Man, Kong, and Chau 2022; Winstone, Balloo, et al. 2022). Approaches to the development of student feedback literacy recognise the important role of teachers in enabling students to develop their own understandings of feedback processes. In this way, then, teachers also hold skills and capacities related to their practice in feedback processes. Carless and Winstone (2020) built upon Carless and Boud (2018) framework for student feedback literacy to propose a conceptual framework for teacher feedback literacy. They defined teacher feedback literacy as ‘knowledge, expertise and dispositions to design feedback processes in ways which enable student uptake of feedback and seed the development of student feedback literacy’ (Carless and Winstone 2020, p. 4). 
They outlined three dimensions of teacher feedback literacy: design (planning curricula and assessment tasks such that students come to appreciate the purpose of feedback, build the capacity for evaluative judgement, and take responsibility for implementing feedback information) relational (showing emotional sensitivity and empathy in feedback processes, and building trust with students) and pragmatic (managing the tensions created by competing functions of feedback, making dec","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"149 - 157"},"PeriodicalIF":4.4,"publicationDate":"2022-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43125226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-07-23. DOI: 10.1080/02602938.2022.2102137
Rong Yang, G. Hu, Jun Lei
Abstract This study explores Chinese English-major students’ intertextual competence and factors shaping their ability to paraphrase in academic writing. Multiple instruments were employed to collect data from 212 English-major students at different academic levels from nine universities in mainland China. The data were analyzed to determine how a range of personal and contextual variables related to their ability to paraphrase an academic text. Two groups of variables were found to be associated with their performance on the paraphrasing task. The first group comprised knowledge and attitudinal variables, including previous training on plagiarism, knowledge of blatant plagiarism, inability to recognize plagiarized texts, and condemnatory attitudes toward plagiarism. The second group consisted of ability measures or their proxy variables, namely English proficiency, instructional context and inadequate academic ability as a perceived cause of plagiarism. The observed relationship between the two groups of variables indicated that the effects of knowledge and attitudinal variables depended on or were mediated by the ability variables. These findings call for a multipronged and coordinated pedagogical approach to developing students’ intertextual competence.
{"title":"Understanding Chinese English-major students’ intertextual competence and contributing factors","authors":"Rong Yang, G. Hu, Jun Lei","doi":"10.1080/02602938.2022.2102137","DOIUrl":"https://doi.org/10.1080/02602938.2022.2102137","url":null,"abstract":"Abstract This study explores Chinese English-major students’ intertextual competence and factors shaping their ability to paraphrase in academic writing. Multiple instruments were employed to collect data from 212 English-major students at different academic levels from nine universities in mainland China. The data were analyzed to determine how a range of personal and contextual variables related to their ability to paraphrase an academic text. Two groups of variables were found to be associated with their performance on the paraphrasing task. The first group comprised knowledge and attitudinal variables, including previous training on plagiarism, knowledge of blatant plagiarism, inability to recognize plagiarized texts, and condemnatory attitudes toward plagiarism. The second group consisted of ability measures or their proxy variables, namely English proficiency, instructional context and inadequate academic ability as a perceived cause of plagiarism. The observed relationship between the two groups of variables indicated that the effects of knowledge and attitudinal variables depended on or were mediated by the ability variables. These findings call for a multipronged and coordinated pedagogical approach to developing students’ intertextual competence.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"657 - 671"},"PeriodicalIF":4.4,"publicationDate":"2022-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48073134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-07-15. DOI: 10.1080/02602938.2022.2093835
Roland Molontay, Marcell Nagy
Abstract An essential task in higher education is to construct a fair admission procedure. A great deal of research has been conducted on a central aspect of admission: predictive validity. However, to the best of our knowledge, this is the first study that investigates how the predictive validity of a composite admission score could be improved without redesigning the tests and introducing new measures. In this study, relying on the existing instruments of the Hungarian nationally standardized university entrance score, we construct an alternative score that not only has higher predictive validity but also a lower variation across disciplines and a smaller under- and overprediction bias in various student groups. To measure the predictive validity, we use an advanced statistical framework. The analysis relies on data of 24,675 students enrolled in the undergraduate programs of the Budapest University of Technology and Economics. We find that while the current score is effective in predicting university success, its predictive validity can be improved by a few changes: lifting the branching nature of the admission, focusing on general rather than program-specific knowledge, and introducing a multiplicative rewarding scheme for advanced level examinations.
{"title":"How to improve the predictive validity of a composite admission score? A case study from Hungary","authors":"Roland Molontay, Marcell Nagy","doi":"10.1080/02602938.2022.2093835","DOIUrl":"https://doi.org/10.1080/02602938.2022.2093835","url":null,"abstract":"Abstract An essential task in higher education is to construct a fair admission procedure. A great deal of research has been conducted on a central aspect of admission: predictive validity. However, to the best of our knowledge, this is the first study that investigates how the predictive validity of a composite admission score could be improved without redesigning the tests and introducing new measures. In this study, relying on the existing instruments of the Hungarian nationally standardized university entrance score, we construct an alternative score that not only has higher predictive validity but also a lower variation across disciplines and a smaller under- and overprediction bias in various student groups. To measure the predictive validity, we use an advanced statistical framework. The analysis relies on data of 24,675 students enrolled in the undergraduate programs of the Budapest University of Technology and Economics. We find that while the current score is effective in predicting university success, its predictive validity can be improved by a few changes: lifting the branching nature of the admission, focusing on general rather than program-specific knowledge, and introducing a multiplicative rewarding scheme for advanced level examinations.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"419 - 437"},"PeriodicalIF":4.4,"publicationDate":"2022-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45366624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-07-13. DOI: 10.1080/02602938.2022.2097197
Monika Pazio Rossiter
Abstract Conceptualising feedback as dialogue places even greater importance on successful interpretation of the message as a crucial step leading to the uptake of feedback. This interpretation is not always straightforward as it takes place through a cultural and linguistic lens that international students bring to feedback conversations. This research explored the role that sociocultural competence plays in students’ uptake of feedback, unpacking the broader role of language and culture in feedback. Interviews with 13 European science, technology, engineering and medicine (STEM) students uncover the variety of experiences and conceptualisations that influence their interpretation of feedback messages. At a theoretical level, the findings call for a greater consideration of the cultural dimension in feedback literacy discourse; at a practical level they call for a greater consideration for developing sociocultural competence for students transitioning to new cultural contexts.
{"title":"‘What you mean versus what you say’ – exploring the role of language and culture in European students’ interpretation of feedback","authors":"Monika Pazio Rossiter","doi":"10.1080/02602938.2022.2097197","DOIUrl":"https://doi.org/10.1080/02602938.2022.2097197","url":null,"abstract":"Abstract Conceptualising feedback as dialogue places even greater importance on successful interpretation of the message as a crucial step leading to the uptake of feedback. This interpretation is not always straightforward as it takes place through a cultural and linguistic lens that international students bring to feedback conversations. This research explored the role that sociocultural competence plays in students’ uptake of feedback, unpacking the broader role of language and culture in feedback. Interviews with 13 European science, technology, engineering and medicine (STEM) students uncover the variety of experiences and conceptualisations that influence their interpretation of feedback messages. At a theoretical level, the findings call for a greater consideration of the cultural dimension in feedback literacy discourse; at a practical level they call for a greater consideration for developing sociocultural competence for students transitioning to new cultural contexts.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"544 - 555"},"PeriodicalIF":4.4,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41780125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-07-13. DOI: 10.1080/02602938.2022.2081668
Samuel Cunningham, Melinda Laundon, A. Cathcart, M. A. Bashar, R. Nayak
ABSTRACT Student evaluation of teaching (SET) surveys are the most widely used tool for collecting higher education student feedback to inform academic quality improvement, promotion and recruitment processes. Malicious and abusive student comments in SET surveys have the potential to harm the wellbeing and career prospects of academics. Despite much literature highlighting abusive feedback in SET surveys, little research attention has been given to methods for screening student comments to identify and remove those that may cause harm to academics. This project applied innovative machine learning techniques, along with a dictionary of keywords to screen more than 100,000 student comments made via a university SET during 2021. The study concluded that these methods, when used in conjunction with a final stage of human checking, are an effective and practicable means of screening student comments. Higher education institutions have an obligation to balance the rights of students to provide feedback on their learning experience with a duty to protect academics from harm by pre-screening student comments before releasing SET results to academics.
{"title":"First, do no harm: automated detection of abusive comments in student evaluation of teaching surveys","authors":"Samuel Cunningham, Melinda Laundon, A. Cathcart, M. A. Bashar, R. Nayak","doi":"10.1080/02602938.2022.2081668","DOIUrl":"https://doi.org/10.1080/02602938.2022.2081668","url":null,"abstract":"ABSTRACT Student evaluation of teaching (SET) surveys are the most widely used tool for collecting higher education student feedback to inform academic quality improvement, promotion and recruitment processes. Malicious and abusive student comments in SET surveys have the potential to harm the wellbeing and career prospects of academics. Despite much literature highlighting abusive feedback in SET surveys, little research attention has been given to methods for screening student comments to identify and remove those that may cause harm to academics. This project applied innovative machine learning techniques, along with a dictionary of keywords to screen more than 100,000 student comments made via a university SET during 2021. The study concluded that these methods, when used in conjunction with a final stage of human checking, are an effective and practicable means of screening student comments. Higher education institutions have an obligation to balance the rights of students to provide feedback on their learning experience with a duty to protect academics from harm by pre-screening student comments before releasing SET results to academics.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"377 - 389"},"PeriodicalIF":4.4,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47754083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}