Examining severity and centrality effects in TestDaF writing and speaking assessments: An extended Bayesian many-facet Rasch analysis

International Journal of Testing (Q2, Social Sciences, Interdisciplinary; Impact Factor 1.0)
Publication date: 2021-09-03
DOI: 10.1080/15305058.2021.1963260
T. Eckes, K. Jin
{"title":"Examining severity and centrality effects in TestDaF writing and speaking assessments: An extended Bayesian many-facet Rasch analysis","authors":"T. Eckes, K. Jin","doi":"10.1080/15305058.2021.1963260","DOIUrl":null,"url":null,"abstract":"Abstract Severity and centrality are two main kinds of rater effects posing threats to the validity and fairness of performance assessments. Adopting Jin and Wang’s (2018) extended facets modeling approach, we separately estimated the magnitude of rater severity and centrality effects in the web-based TestDaF (Test of German as a Foreign Language) writing and speaking assessments using Bayesian MCMC methods. The findings revealed that (a) the extended facets model had a better data–model fit than models that ignored either or both kinds of rater effects, (b) rating scale and partial credit versions of the extended model differed in terms of data–model fit for writing and speaking, (c) rater severity and centrality estimates were not significantly correlated with each other, and (d) centrality effects had a demonstrable impact on examinee rank orderings. The discussion focuses on implications for the analysis and evaluation of rating quality in performance assessments.","PeriodicalId":46615,"journal":{"name":"International Journal of Testing","volume":null,"pages":null},"PeriodicalIF":1.0000,"publicationDate":"2021-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Testing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/15305058.2021.1963260","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
Citations: 6

Abstract

Severity and centrality are two main kinds of rater effects posing threats to the validity and fairness of performance assessments. Adopting Jin and Wang’s (2018) extended facets modeling approach, we separately estimated the magnitude of rater severity and centrality effects in the web-based TestDaF (Test of German as a Foreign Language) writing and speaking assessments using Bayesian MCMC methods. The findings revealed that (a) the extended facets model had a better data–model fit than models that ignored either or both kinds of rater effects, (b) rating scale and partial credit versions of the extended model differed in terms of data–model fit for writing and speaking, (c) rater severity and centrality estimates were not significantly correlated with each other, and (d) centrality effects had a demonstrable impact on examinee rank orderings. The discussion focuses on implications for the analysis and evaluation of rating quality in performance assessments.
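The abstract does not reproduce the model equations, but Jin and Wang's (2018) extended facets model augments the standard many-facet Rasch model with a rater-specific threshold weight, taking (approximately) the adjacent-category form log[P_nijk / P_nij(k-1)] = theta_n - beta_i - lambda_j - omega_j * tau_k, where theta_n is examinee ability, beta_i is task difficulty, lambda_j is rater j's severity, tau_k is the k-th category threshold, and omega_j captures centrality: omega_j > 1 spreads the thresholds and pulls ratings toward the middle categories, while omega_j < 1 indicates extremity. The sketch below shows how the rating scale version of such a model could be fitted with Bayesian MCMC; it is a minimal illustration in PyMC, not the authors' code, and all data shapes, variable names, and priors are assumptions.

# Hypothetical sketch (not the authors' implementation): Bayesian estimation of a
# rating-scale many-facet Rasch model extended with rater severity (lam) and a
# rater-specific threshold weight (omega) for centrality, via MCMC in PyMC.
import numpy as np
import pymc as pm
import pytensor.tensor as pt
from pytensor.tensor.special import softmax

rng = np.random.default_rng(1)
n_persons, n_tasks, n_raters, K = 50, 2, 5, 5   # K rating categories: 0..K-1
N = 300                                          # total number of ratings
p_idx = rng.integers(n_persons, size=N)          # examinee index per rating
t_idx = rng.integers(n_tasks, size=N)            # task index per rating
r_idx = rng.integers(n_raters, size=N)           # rater index per rating
x_obs = rng.integers(K, size=N)                  # placeholder observed ratings

with pm.Model() as extended_mfrm:
    theta = pm.Normal("theta", 0.0, 1.0, shape=n_persons)         # examinee ability
    beta = pm.Normal("beta", 0.0, 2.0, shape=n_tasks)             # task difficulty
    lam = pm.Normal("lam", 0.0, 2.0, shape=n_raters)              # rater severity
    log_omega = pm.Normal("log_omega", 0.0, 0.5, shape=n_raters)  # log threshold weight
    omega = pm.Deterministic("omega", pt.exp(log_omega))          # >1 central, <1 extreme
    tau = pm.Normal("tau", 0.0, 2.0, shape=K - 1)                 # shared category thresholds

    # Adjacent-category (Rasch) kernel: P(X = k) is proportional to
    # exp(sum over h <= k of (eta - omega_j * tau_h)), with the k = 0 term fixed at 0.
    eta = theta[p_idx] - beta[t_idx] - lam[r_idx]                 # (N,)
    steps = eta[:, None] - omega[r_idx][:, None] * tau[None, :]   # (N, K-1)
    kernel = pt.concatenate(
        [pt.zeros_like(eta)[:, None], pt.cumsum(steps, axis=1)], axis=1
    )                                                             # (N, K)
    pm.Categorical("x", p=softmax(kernel, axis=1), observed=x_obs)

    idata = pm.sample(1000, tune=1000, target_accept=0.9, random_seed=1)

The partial credit version contrasted in finding (b) would replace the shared thresholds tau with task-specific thresholds of shape (n_tasks, K - 1); identification constraints (e.g., centering lam and log_omega) are omitted here for brevity.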
Source journal: International Journal of Testing (Social Sciences, Interdisciplinary)
CiteScore: 3.60 · Self-citation rate: 11.80% · Articles published: 13
Latest articles in this journal:
Combining Mokken scale analysis with Rasch measurement theory to explore differences in measurement quality between subgroups
Examining the construct validity of the MIDUS version of the Multidimensional Personality Questionnaire (MPQ)
Where nonresponse is at its loudest: Cross-country and individual differences in item nonresponse across the PISA 2018 student questionnaire
The choice between cognitive diagnosis and item response theory: A case study from medical education
Beyond group comparisons: Accounting for intersectional sources of bias in international survey measures