Argument-based validation of Academic Collocation Tests

Impact Factor: 2.2 · CAS Quartile 1 (Literature) · LANGUAGE & LINGUISTICS · Language Testing · Pub Date: 2023-10-21 · DOI: 10.1177/02655322231198499
Thi My Hang Nguyen, Peter Gu, Averil Coxhead
{"title":"Argument-based validation of Academic Collocation Tests","authors":"Thi My Hang Nguyen, Peter Gu, Averil Coxhead","doi":"10.1177/02655322231198499","DOIUrl":null,"url":null,"abstract":"Despite extensive research on assessing collocational knowledge, valid measures of academic collocations remain elusive. With the present study, we begin an argument-based approach to validate two Academic Collocation Tests (ACTs) that assess the ability to recognize and produce academic collocations (i.e., two-word units such as key element and well established) in written contexts. A total of 343 tertiary students completed a background questionnaire (including demographic information, IELTS scores, and learning experience), the ACTs, and the Vocabulary Size Test. Forty-four participants also took part in post-test interviews to share reflections on the tests and retook the ACTs verbally. The findings showed that the scoring inference based on analyses of test item characteristics, testing conditions, and scoring procedures was partially supported. The generalization inference, based on the consistency of item measures and testing occasions, was justified. The extrapolation inference, drawn from correlations with other measures and factors such as collocation frequency and learning experience, received partial support. Suggestions for increasing the degree of support for the inferences are discussed. The present study reinforces the value of validation research and generates the momentum for test developers to continue this practice with other vocabulary tests.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"1 1","pages":"0"},"PeriodicalIF":2.2000,"publicationDate":"2023-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Language Testing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/02655322231198499","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Citations: 0

Abstract

Despite extensive research on assessing collocational knowledge, valid measures of academic collocations remain elusive. With the present study, we begin an argument-based approach to validating two Academic Collocation Tests (ACTs) that assess the ability to recognize and produce academic collocations (i.e., two-word units such as key element and well established) in written contexts. A total of 343 tertiary students completed a background questionnaire (covering demographic information, IELTS scores, and learning experience), the ACTs, and the Vocabulary Size Test. Forty-four participants also took part in post-test interviews to share reflections on the tests and retook the ACTs verbally. The findings showed that the scoring inference, based on analyses of test item characteristics, testing conditions, and scoring procedures, was partially supported. The generalization inference, based on the consistency of item measures and testing occasions, was justified. The extrapolation inference, drawn from correlations with other measures and from factors such as collocation frequency and learning experience, received partial support. Suggestions for increasing the degree of support for these inferences are discussed. The present study reinforces the value of validation research and generates momentum for test developers to continue this practice with other vocabulary tests.
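The generalization and extrapolation inferences above rest on quantitative evidence: the consistency of item measures, and correlations between ACT scores and external measures such as the Vocabulary Size Test. The paper does not publish its analysis code, so the sketch below is purely illustrative. It simulates a persons-by-items response matrix and shows how one common reliability index (Cronbach's alpha; the authors may well have used Rasch-based item measures instead) and a Pearson correlation with an external score could be computed. The item count, score scales, and all data here are invented for illustration.

```python
# Illustrative only: simulated data standing in for ACT item responses.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dichotomous responses: 343 test takers x 30 items
# (343 matches the study's sample size; the item count is invented).
difficulty = rng.uniform(0.3, 0.9, size=30)           # per-item p-values
responses = (rng.random((343, 30)) < difficulty).astype(int)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a persons-by-items score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical external measure (e.g., Vocabulary Size Test scores),
# generated here to correlate moderately with ACT totals.
act_totals = responses.sum(axis=1)
vst_scores = act_totals * 2.5 + rng.normal(0, 8, size=act_totals.shape)

# Pearson correlation: the kind of evidence cited for extrapolation.
r = np.corrcoef(act_totals, vst_scores)[0, 1]

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
print(f"ACT total vs. VST: r = {r:.2f}")
```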
Source journal: Language Testing
CiteScore: 6.70 · Self-citation rate: 9.80% · Articles per year: 35
About the journal: Language Testing is a fully peer-reviewed international journal that publishes original research and review articles on language testing and assessment. It provides a forum for the exchange of ideas and information between people working in the fields of first and second language testing and assessment, including researchers and practitioners in EFL and ESL testing, and assessment in child language acquisition and language pathology. In addition, special attention is given to issues of testing theory, experimental investigations, and the follow-up of practical implications.
Latest articles in this journal:
- Can language test providers do more to support open science? A response to Winke
- Considerations to promote and accelerate Open Science: A response to Winke
- Evaluating the impact of nonverbal behavior on language ability ratings
- Sharing, collaborating, and building trust: How Open Science advances language testing
- Open Science in language assessment research contexts: A reply to Winke