Implications of training university teachers in developing local writing rating scales

Studies in Language Assessment · IF 0.1 · Q4 (Linguistics) · Pub Date: 2022-01-01 · DOI: 10.58379/xvdf9070
O. Kvasova, Lyudmyla Hnapovska, V. Kalinichenko, Luliia Budas

Abstract

Language assessment literacy is currently in search of new, modern conceptualisations in which contextual factors have growing significance and impact (Tsagari, 2020). This article presents an initiative to promote writing assessment literacy in a culture-specific educational context. Assessment of writing is an under-researched area in Ukrainian higher education, where teachers must act both as raters and as rating-scale developers without being properly trained in language assessment. These gaps in writing assessment literacy prompted research into the strengths and weaknesses of a local rating scale developed by university teachers. The research was conducted within an Erasmus+ staff mobility project in 2016–2019 and followed up by dissemination events held at several universities in Ukraine. The current study explores the impact of training in writing assessment on the processes and outcomes of university teachers' development and use of analytic rating scales. The paper analyses how three teams of teachers from different universities coped with the task, and whether the training they underwent enabled them to design well-performing rating scales. The nine participants in the study developed three local, context-specific analytic rating scales following the intuitive method of scale design, detailed in guidelines prepared by the trainer. Because all three scales targeted the same context (ESP) and the same CEFR level range (B1–B2), it was possible to compare them. The study testifies to a positive impact of the training on teachers' literacy in writing assessment.