Improving student participation in SET: effects of increased transparency on the use of student feedback in practice
Marloes L. Nederhand, Judith Auer, B. Giesbers, Ad W. A. Scheepers, Elise van der Gaag
Pub Date: 2023-01-02 | DOI: 10.1080/02602938.2022.2052800 | Assessment & Evaluation in Higher Education, pp. 107–120
Abstract: Student evaluations of teaching (SET) are an influential – and often the sole – tool in higher education for determining course and teacher effectiveness. It is therefore problematic that SET results are undermined by low response rates and poor response quality. Prior research suggests that transparency about how student feedback is applied in practice can increase SET effectiveness and students’ motivation to participate. The current study is the first to empirically test the effects of transparency in a quasi-experimental field setting. After students filled in the SET, the intervention group was given a summary of the students’ comments and an explanation of how the teacher would use them to improve the course. We then examined student participation in subsequent course evaluations. Contrary to our expectations, neither response rates nor response quality improved significantly in the intervention group relative to the control group. Furthermore, perceptions of meaningfulness did not differ significantly between the two groups. This study indicates that more empirical research is needed to define the conditions under which transparency influences student participation. Further implications and recommendations for future research are discussed.
One field too far?
Timon de Boer, Frank J. van Rijnsoever
Pub Date: 2022-12-28 | DOI: 10.1080/02602938.2022.2158453 | Assessment & Evaluation in Higher Education, pp. 966–979
Abstract: Prospective graduate students are usually required to have attained an undergraduate degree in a related field, with high prior grades, to gain admission. There is consensus that some relatedness between a student’s undergraduate and graduate programs is required for admission. We propose a new measure of this relatedness based on cosine similarity, a method tried and tested in fields such as bibliometrics and economic geography. We used this measure to calculate the relatedness between a student’s undergraduate and graduate program, and tested its effect on study success. Our models show an interaction effect between undergraduate grades and cognitive relatedness on graduate grades: for students with high cognitive relatedness, the relationship between bachelor grades and master grades is about twice as strong as for students with low cognitive relatedness. This is an important finding because it shows that undergraduate grades, the most common admission instrument in higher education, have limited usefulness for students from relatively unrelated undergraduate programs. Admissions officers need to carefully assess their admission instruments for such students and rely less on grades in admission decisions.
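The cosine-similarity measure the abstract describes can be illustrated with a short sketch. The sketch below assumes each programme is represented as a vector of subject-area weights; the topic labels, weights and programme names are hypothetical, and the paper’s actual construction of programme profiles may differ.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two non-negative profile vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

# Hypothetical topic-weight profiles (e.g. shares of credits per subject area);
# the paper's actual operationalisation of these vectors may differ.
topics = ["mathematics", "biology", "economics", "programming"]
bsc_econometrics = [0.5, 0.0, 0.4, 0.1]
msc_data_science = [0.4, 0.0, 0.1, 0.5]
msc_ecology      = [0.1, 0.7, 0.0, 0.2]

print(cosine_similarity(bsc_econometrics, msc_data_science))  # ~0.69: related fields
print(cosine_similarity(bsc_econometrics, msc_ecology))       # ~0.15: distant fields
```

Cosine similarity of non-negative profiles ranges from 0 (no overlap between fields) to 1 (identical profiles), which makes it a natural relatedness score to interact with undergraduate grades in models of study success.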
Engaging with multiple sources of feedback in academic writing: postgraduate students’ perspectives
Linlin Xu, Tiefu Zhang
Pub Date: 2022-12-23 | DOI: 10.1080/02602938.2022.2161089 | Assessment & Evaluation in Higher Education, pp. 995–1008
Abstract: Despite a surge of research interest in feedback engagement in higher education, postgraduate students’ engagement with multiple sources of feedback in the context of academic writing is understudied. Informed by the notion of feedback literacy, we explore how Chinese postgraduate students perceive and engage with automated, peer and teacher feedback, as well as the feedback process as a whole, in academic writing. The analysis of 120 students’ diaries and ten students’ interviews shows that multiple sources of feedback and related activities complement each other in feedback areas (e.g. grammar, content, structure), perspectives (e.g. reader, expert) and depth of improvement to the writing (e.g. from correction to polishing or refinement). We conclude that engaging with multiple sources of feedback supports students’ writing and learning. This study adds to the literature by revealing the social, co-constructed, complementary and enabling nature of feedback engagement. The students engage with multiple sources of feedback and related activities internally, externally, proactively, critically and collaboratively in the intrapersonal, interpersonal and human-material spheres.
Exploring students’ strategy-related beliefs and their mediation on strategies to act on feedback in L2 writing
A. Tam, Gigi Kai Yin Auyeung
Pub Date: 2022-12-18 | DOI: 10.1080/02602938.2022.2157795 | Assessment & Evaluation in Higher Education, pp. 1038–1052
Abstract: Substantial research indicates that students’ beliefs mediate learning strategies. Nevertheless, students’ strategy-related beliefs about feedback, and how those beliefs mediate strategies to act on feedback, are insufficiently addressed. This case study aimed to examine: (1) students’ strategy-related beliefs about feedback in L2 writing and (2) how beliefs mediated students’ actions when a feedback intervention was implemented to facilitate students’ use of feedback strategies. Data were collected from semi-structured interviews and reflective journals from 10 Year-1 participants studying in an associate degree programme at a higher education institute in Hong Kong. Students’ strategy-related beliefs about feedback encompass the evaluation of strategies for: (1) understanding feedback, (2) addressing areas for improvement, (3) seeking feedback and (4) implementing feedback. Nine participants adopted a more comprehensive range of strategies to act on feedback, consistent with their strategy-related beliefs. One participant, however, did not appreciate or use feedback. Pedagogical implications are discussed.
Student voice in assessment and feedback (2011–2022): a systematic review
S. Sun, X. Gao, B. Rahmani, Priyanka Bose, C. Davison
Pub Date: 2022-12-18 | DOI: 10.1080/02602938.2022.2156478 | Assessment & Evaluation in Higher Education, pp. 1009–1024
Abstract: In recent years, there has been a growing body of research on student voice in the context of higher education, generating significant insights for pedagogical improvement. This systematic literature review aims to examine studies on student voice from 2011 to 2022, specifically those concerned with assessment and feedback in higher education. The review draws on 38 empirical studies and identifies the increasing use of mixed-methods designs in student voice research related to assessment and feedback. The analysis of these studies highlights that student voice research can improve students’ experiences, change teachers’ practices and inform university support concerning assessment and feedback. The review finds, however, that most studies were conducted in, or on students from, developed countries. It is necessary for researchers to engage students from different backgrounds to investigate their experiences of assessment and feedback. The results of the review also suggest that more longitudinal, mixed-methods studies should be conducted to generate further critical insights. Future research should regard students as partners in effective assessment and feedback practices, and a focus should be placed on students developing assessment and feedback literacy.
Exploring the process of evaluative judgement: the case of engineering students judging intercultural competence
Jiahui Luo, C. Chan
Pub Date: 2022-12-15 | DOI: 10.1080/02602938.2022.2155611 | Assessment & Evaluation in Higher Education, pp. 951–965
Abstract: For students to sustain their learning beyond higher education, it is important for them to develop their evaluative judgement. Although the importance of evaluative judgement is well-established, the process through which students make such judgements remains contested. This study explores students’ evaluative judgement process by asking 20 engineering students to evaluate their own intercultural competence and that of other engineers in task-based interviews. The findings reveal that in the process of judgement-making, students negotiate and navigate multiple dimensions, including their ‘knowledge of intercultural competence’, ‘awareness of bias’, ‘attitude towards development’, ‘capability to judge’, ‘action towards improvement’ and ‘identity as assessor’. Building on these findings, the study further reconceptualises evaluative judgement as a negotiated process rather than a capability.
Humanising feedback encounters: a qualitative study of relational literacies for teachers engaging in technology-enhanced feedback
A. Payne, R. Ajjawi, J. Holloway
Pub Date: 2022-12-14 | DOI: 10.1080/02602938.2022.2155610 | Assessment & Evaluation in Higher Education, pp. 903–914
Abstract: Modes of feedback such as audio or video are thought to foster relationality because they humanise feedback encounters. Few studies have examined teacher feedback literacies for relationality. This knowledge gap is significant, as students want to be seen by their teachers and want their teachers to express care within the feedback encounter. Teacher feedback literacies are the knowledges, skills and dispositions needed to enhance and sustain a student-centred feedback process. Using a qualitative approach, our research question centred on what teacher feedback literacies and strategies are required to implement relational technology-enhanced feedback. We interviewed 10 higher education teachers with diverse characteristics and identified three teacher literacies for relational technology-enhanced feedback: socio-affective literacy fosters awareness of student attitudes toward feedback and of teacher self-expression; design literacy supports a consciousness of the logical arrangement and purpose of feedback to better prepare and engage students; and communication literacy reflects the construction of a deliberate, empathetic message. The implication is that higher education institutions and teachers should consider how relationality can strengthen feedback and better support and encourage students’ engagement with it.
Learner-generated podcasts: an authentic and enjoyable assessment for students working in pairs
Andy Wakefield, Rebecca K. Pike, Sheila Amici-Dargan
Pub Date: 2022-12-05 | DOI: 10.1080/02602938.2022.2152426 | Assessment & Evaluation in Higher Education, pp. 1025–1037
Abstract: Assessment and feedback are common sources of student dissatisfaction within higher education, and employers have expressed dissatisfaction with graduates’ communication skills. Authentic assessment, incorporating ‘real-world’ context and student collaboration, provides a means to address both issues simultaneously. We discuss how we used authentic assessment within a biological sciences degree programme, replacing an individually written essay coursework assignment with learner-generated podcasts. We outline our implementation strategy in line with existing theory on learner-generated digital media, assessment for learning and self-regulated learning. Quantitative and qualitative analyses of survey data (2020 and 2021) indicate that students prefer podcast assignments over traditional essay coursework, perceiving them to be more enjoyable, more authentic, allowing for greater creativity and better for building their confidence as communicators. Podcasting as an assessment may also have a positive influence on knowledge retention and promote deep learning. Our assessment design provides opportunities for community building, formative peer review and enhancing assessment literacy, while being flexible enough to be used across any discipline of study and/or as an interdisciplinary assessment.
Is continuous assessment inclusive? An analysis of factors influencing student grades
David Playfoot, Laura L. Wilkinson, Jessica K. Mead
Pub Date: 2022-11-29 | DOI: 10.1080/02602938.2022.2150834 | Assessment & Evaluation in Higher Education, pp. 938–950
Abstract: This paper reports a series of studies that assessed the performance of students on continuous assessment components from two courses in an undergraduate psychology programme. Data were collected from two consecutive cohorts of students (total N = 576), and students’ grades were compared based on additional learning needs (ALN versus no ALN), whether or not the students had requested an extension to a deadline, and whether or not students had missed any of the tests that made up the continuous assessment component. Results showed no significant differences in attainment between students with and without ALN, supporting the argument that continuous assessment does not differentially impact students who already require additional support. Students who were granted deadline extensions achieved significantly lower scores, but only on the course with content that built week on week. Students who missed one or more tests achieved significantly lower scores even if the grade was calculated ignoring the questions that a student had not attempted. The implications of these findings for assessment practice in higher education are discussed.
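One detail of the abstract, the grade ‘calculated ignoring the questions that a student had not attempted’, amounts to prorating over attempted items only. Below is a minimal sketch of that calculation, assuming each test is recorded as per-question (awarded, available) mark pairs with None for unattempted questions; the paper does not specify its exact procedure.

```python
def prorated_score(marks):
    """Percentage score computed over attempted questions only.

    marks: list of (awarded, available) pairs per question, with awarded=None
    for questions the student did not attempt. Unattempted questions are
    excluded from both the numerator and the denominator.
    """
    attempted = [(a, m) for a, m in marks if a is not None]
    if not attempted:
        return None  # no attempted questions, so no basis for a score
    awarded = sum(a for a, _ in attempted)
    available = sum(m for _, m in attempted)
    return 100.0 * awarded / available

# A student who skipped question 3 entirely:
marks = [(4, 5), (3, 5), (None, 5), (5, 5)]
print(prorated_score(marks))  # 80.0, computed over the 15 attempted marks
```

Under this prorating, a missed test cannot drag the score down mechanically, which is what makes the reported finding notable: students who missed tests still scored significantly lower.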
Automated marking of longer computational questions in engineering subjects
Christopher Pearson, Nigel Penna
Pub Date: 2022-11-12 | DOI: 10.1080/02602938.2022.2144801 | Assessment & Evaluation in Higher Education, pp. 915–925
Abstract: E-assessments are becoming increasingly common and progressively more complex. Consequently, careful design and marking of these longer, more complex questions is imperative. This article uses the NUMBAS e-assessment tool to investigate best practice for creating longer questions and their mark schemes on surveying modules taken by engineering students at Newcastle University. Automated marking enables the calculation of follow-through marks when incorrect answers are used in subsequent parts. However, awarding follow-through marks with no further penalty for solutions that are fundamentally incorrect leads to non-normally distributed marks. Consequently, it was found that follow-through marks should be awarded at 25% or 50% of the total available to produce a normal distribution. Appropriate question design is vital to enable automated method marking in longer-style e-assessment, with questions split into multiple steps. Longer calculation questions split into too few parts led to all-or-nothing questions and consequently bimodal mark distributions, whilst questions separated into too many parts provided too much guidance to students, so did not adequately assess the learning outcomes and led to unnaturally high marks. To balance these factors, we found that longer questions should be split into approximately 3–4 parts, although this is application dependent.
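The penalised follow-through scheme the abstract recommends can be sketched as follows: a later part earns full marks if correct outright, or a scaled fraction (25% or 50%, per the article’s recommendation) if it is consistent with the value recomputed from the student’s own earlier, incorrect answer. The function names and the worked example below are illustrative assumptions, not NUMBAS’s actual marking API.

```python
def mark_part(student_answer, correct_answer, carried_answer,
              marks_available, follow_through_factor=0.5, tol=1e-6):
    """Mark one part of a multi-part calculation question.

    Full marks if the answer is correct outright. Otherwise, if it matches
    the value recomputed from the student's own earlier (incorrect) result,
    award follow-through marks scaled by follow_through_factor
    (e.g. 0.25 or 0.5, as the article recommends).
    """
    if abs(student_answer - correct_answer) <= tol:
        return marks_available
    if carried_answer is not None and abs(student_answer - carried_answer) <= tol:
        return follow_through_factor * marks_available
    return 0.0

# Part (a): student computes a distance incorrectly (98.0 instead of 100.0).
# Part (b) doubles the part (a) result; the student's 196.0 is consistent
# with their own earlier answer, so they earn scaled follow-through marks.
part_a = mark_part(98.0, 100.0, None, 4)       # 0.0: wrong outright
part_b = mark_part(196.0, 200.0, 2 * 98.0, 4)  # 2.0: follow-through at 50%
print(part_a, part_b)
```

Scaling by the factor, rather than awarding full follow-through credit, is what penalises the fundamentally incorrect solution path and, per the article, restores an approximately normal mark distribution.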