A Latent Class IRT Approach to Defining and Measuring Language Proficiency.

Tammy D Tolar, D. Francis, Paulina A. Kulesz, K. Stuebing
{"title":"A Latent Class IRT Approach to Defining and Measuring Language Proficiency.","authors":"Tammy D Tolar, D. Francis, Paulina A. Kulesz, K. Stuebing","doi":"10.59863/ycua8620","DOIUrl":null,"url":null,"abstract":"English language learner (EL) status has high stakes implications for determining when and how ELs should be evaluated for academic achievement. In the US, students designated as English learners are assessed annually for English language proficiency (ELP), a complex construct whose conceptualization has evolved in recent years to reflect more precisely the language demands of content area achievement as reflected in the standards of individual states and state language assessment consortia, such as WIDA and ELPA21. The goal of this paper was to examine the possible role for and utility of using content area assessments to validate language proficiency mastery criteria. Specifically, we applied mixture item response models to identify two classes of EL students: (1) ELs for whom English language arts and math achievement test items have similar difficulty and discrimination parameters as they do for non-ELs and (2) ELs for whom the test items function differently. We used latent class IRT methods to identify the two groups of ELs and to evaluate the effects of different subscales of ELP (reading, writing, listening, and speaking) on group membership. Only reading and writing were significant predictors of class membership. Cut-scores based on summary scores of ELP were imperfect predictors of class membership and indicated the need for finer differentiation within the top proficiency category. This study demonstrates the importance of linking definitions of ELP to the context for which ELP is used and suggests the possible value of psychometric analyses when language proficiency standards are linked to the language requirements for content area achievement.","PeriodicalId":72586,"journal":{"name":"Chinese/English journal of educational measurement and evaluation","volume":"29 1","pages":"49-73"},"PeriodicalIF":0.0000,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chinese/English journal of educational measurement and evaluation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.59863/ycua8620","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

English language learner (EL) status has high-stakes implications for determining when and how ELs should be evaluated for academic achievement. In the US, students designated as English learners are assessed annually for English language proficiency (ELP), a complex construct whose conceptualization has evolved in recent years to capture more precisely the language demands of content area achievement as reflected in the standards of individual states and of state language assessment consortia such as WIDA and ELPA21. The goal of this paper was to examine the possible role and utility of content area assessments in validating language proficiency mastery criteria. Specifically, we applied mixture item response models to identify two classes of EL students: (1) ELs for whom English language arts and math achievement test items have difficulty and discrimination parameters similar to those for non-ELs, and (2) ELs for whom the test items function differently. We used latent class IRT methods to identify the two groups of ELs and to evaluate the effects of different subscales of ELP (reading, writing, listening, and speaking) on group membership. Only reading and writing were significant predictors of class membership. Cut-scores based on summary scores of ELP were imperfect predictors of class membership and indicated the need for finer differentiation within the top proficiency category. This study demonstrates the importance of linking definitions of ELP to the context in which ELP is used and suggests the possible value of psychometric analyses when language proficiency standards are linked to the language requirements for content area achievement.
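For readers who want a concrete picture of the modeling approach the abstract summarizes, the display below is a minimal sketch of a two-class mixture 2PL IRT model with latent class membership regressed on the four ELP subscale scores. The 2PL item response function, the logistic membership model, and all symbols (a_jg, b_jg, the gamma coefficients) are illustrative assumptions, not the paper's exact parameterization.

```latex
% Sketch of a two-class mixture 2PL IRT model (illustrative notation and
% constraints; not the paper's exact specification). Assumes amsmath.
% X_{ij}: response of EL student i to achievement item j
% \theta_i: latent achievement; c_i \in \{1, 2\}: latent class
% R_i, W_i, L_i, S_i: ELP reading, writing, listening, speaking scores
\begin{align}
  P(X_{ij} = 1 \mid \theta_i, c_i = g)
    &= \frac{1}{1 + \exp\left[-a_{jg}\left(\theta_i - b_{jg}\right)\right]},
       \quad g \in \{1, 2\}, \\
  P(c_i = 2 \mid R_i, W_i, L_i, S_i)
    &= \frac{\exp(\eta_i)}{1 + \exp(\eta_i)},
       \quad
       \eta_i = \gamma_0 + \gamma_R R_i + \gamma_W W_i + \gamma_L L_i + \gamma_S S_i.
\end{align}
% Class 1: item parameters (a_{j1}, b_{j1}) constrained to the non-EL
% calibration, so items function for these ELs as they do for non-ELs.
% Class 2: item parameters freely estimated, allowing the items to
% function differently for these ELs.
```

Under a specification of this kind, the abstract's finding that only reading and writing predict class membership corresponds to the reading and writing coefficients being significant while the listening and speaking coefficients are not, and the cut-score analysis asks how well thresholds on a summary ELP score reproduce the estimated class assignments.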