
Latest publications in Language Testing

Measuring bilingual language dominance: An examination of the reliability of the Bilingual Language Profile
IF 4.1 · CAS Tier 1 (Literature) · LANGUAGE & LINGUISTICS · Language Testing 40(1): 521–547 · Pub Date: 2023-01-12 · DOI: 10.1177/02655322221139162
Daniel J. Olson
Measuring language dominance, broadly defined as the relative strength of each of a bilingual’s two languages, remains a crucial methodological issue in bilingualism research. While various methods have been proposed, the Bilingual Language Profile (BLP) has been one of the most widely used tools for measuring language dominance. While previous studies have begun to establish its validity, the BLP has yet to be systematically evaluated with respect to reliability. Addressing this methodological gap, the current study examines the reliability of the BLP, employing a test–retest methodology with a large (N = 248), varied sample of Spanish–English bilinguals. Analysis focuses on the test–retest reliability of the overall dominance score, the dominant and non-dominant global language scores, and the subcomponent scores. The results demonstrate that the language dominance score produced by the BLP shows “excellent” levels of test–retest reliability. In addition, while some differences were found between the reliability of global language scores for the dominant and non-dominant languages, and for the different subcomponent scores, all components of the BLP display strong reliability. Taken as a whole, this study provides evidence for the reliability of BLP as a measure of bilingual language dominance.
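The test–retest logic the abstract describes can be sketched with a simple correlation between scores from two administrations of the same instrument. The "excellent" label typically refers to ICC benchmarks; this minimal sketch uses a plain Pearson coefficient as a rough stand-in, and all scores below are invented, not the study's data.

```python
# Illustrative test-retest reliability check: correlate scores from two
# administrations of the same instrument. All scores below are made up.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# BLP dominance scores range from -218 to +218; 0 = balanced bilingual.
test   = [-50.3, 12.8, 101.5, -120.0, 30.2, 75.9]
retest = [-47.1, 10.4,  98.8, -115.6, 35.0, 70.3]

r = pearson_r(test, retest)
print(round(r, 3))
```

A coefficient near 1.0 indicates that participants' dominance scores were stable across the two administrations, which is the pattern the study reports.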
Citations: 1
Book Review: Reflecting on the Common European Framework of Reference for Languages and its companion volume
IF 4.1 · CAS Tier 1 (Literature) · LANGUAGE & LINGUISTICS · Language Testing 40(1): 453–457 · Pub Date: 2023-01-04 · DOI: 10.1177/02655322221144788
Claudia Harsch
Aryadoust, V., Ng, L. Y., & Sayama, H. (2020). A comprehensive review of Rasch measurement in language assessment: Recommendations and guidelines for research. Language Testing, 38(1), 6–40. https://doi.org/10.1177/0265532220927487
Berrío, Á. I., Gómez-Benito, J., & Arias-Patiño, E. M. (2020). Developments and trends in research on methods of detecting differential item functioning. Educational Research Review, 31, Article 100340. https://doi.org/10.1016/j.edurev.2020.100340
Choi, Y.-J., & Asilkalkan, A. (2019). R packages for item response theory analysis: Descriptions and features. Measurement: Interdisciplinary Research and Perspectives, 17(3), 168–175. https://doi.org/10.1080/15366367.2019.1586404
Desjardins, C. D., & Bulut, O. (2018). Handbook of educational measurement and psychometrics using R. CRC Press. https://doi.org/10.1201/b20498
Linacre, J. M. (2022a). Facets computer program for many-facet Rasch measurement (Version 3.84.0). Winsteps.
Linacre, J. M. (2022b). Winsteps® Rasch measurement computer program (Version 5.3.1). Winsteps.
Luo, Y., & Jiao, H. (2017). Using the Stan program for Bayesian item response theory. Educational and Psychological Measurement, 78(3), 384–408. https://doi.org/10.1177/0013164417693666
Nicklin, C., & Vitta, J. P. (2022). Assessing Rasch measurement estimation methods across R packages with yes/no vocabulary test data. Language Testing, 39(4), 513–540. https://doi.org/10.1177/02655322211066822
Yildiz, H. (2021). IrtGUI: Item response theory analysis with a graphic user interface (R Package Version 0.2). https://CRAN.R-project.org/package=irtGUI
Citations: 0
Construct validity and fairness of an operational listening test with World Englishes
IF 4.1 · CAS Tier 1 (Literature) · LANGUAGE & LINGUISTICS · Language Testing 40(1): 493–520 · Pub Date: 2023-01-04 · DOI: 10.1177/02655322221137869
H. Nishizawa
In this study, I investigate the construct validity and fairness pertaining to the use of a variety of Englishes in listening test input. I obtained data from a post-entry English language placement test administered at a public university in the United States. In addition to expectedly familiar American English, the test features Hawai’i, Filipino, and Indian English, which are expectedly less familiar to our test takers, but justified by the context. I used confirmatory factor analysis to test whether the category of unfamiliar English items formed a latent factor distinct from the other category of more familiar American English items. I used Rasch-based differential item functioning analysis to examine item biases as a function of examinees’ place of origin. The results from the confirmatory factor analysis suggested that the unfamiliar English items tapped into the same underlying construct as the familiar English items. The Rasch-based differential item functioning analysis revealed many instances of item bias among unfamiliar English items with higher proportions of item biases for items targeting narrow comprehension than broad comprehension. However, at the test level, the unfamiliar English items did not substantially influence raw total scores. These findings offer support for using a variety of Englishes in listening tests.
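The item-bias idea behind the analysis above can be loosely illustrated in code. The study used Rasch-based differential item functioning (DIF); the sketch below is only a much cruder stand-in that contrasts raw item facility (proportion correct) between two examinee groups, and the response data are invented.

```python
# Crude illustration of the DIF idea: compare how easy each item is for two
# examinee groups and flag large gaps. This is NOT the Rasch-based method
# the study used, and the 0/1 response matrices below are invented.

def item_facility(responses):
    """Proportion correct per item, given rows of 0/1 responses."""
    n = len(responses)
    return [sum(row[i] for row in responses) / n for i in range(len(responses[0]))]

group_a = [[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1]]
group_b = [[1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 0, 1]]

fa = item_facility(group_a)
fb = item_facility(group_b)

# Flag items whose facility differs by more than 0.3 between groups.
flags = [abs(a - b) > 0.3 for a, b in zip(fa, fb)]
print(flags)
```

Real DIF analysis additionally conditions on overall ability (e.g., via Rasch person measures), so that a flag reflects bias rather than a genuine proficiency difference between the groups.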
Citations: 1
The vexing problem of validity and the future of second language assessment
IF 4.1 · CAS Tier 1 (Literature) · LANGUAGE & LINGUISTICS · Language Testing 40(1): 8–14 · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221125204
Vahid Aryadoust
Construct validity and building validity arguments are some of the main challenges facing the language assessment community. The notion of construct validity and validity arguments arose from research in psychological assessment and developed into the gold standard of validation/validity research in language assessment. At a theoretical level, construct validity and validity arguments conflate the scientific reasoning in assessment and policy matters of ethics. Thus, a test validator is expected to simultaneously serve the role of conducting scientific research and examining the consequential basis of assessments. I contend that validity investigations should be decoupled from the ethical and social aspects of assessment. In addition, the near-exclusive focus of empirical construct validity research on cognitive processing has not resulted in sufficient accuracy and replicability in predicting test takers’ performance in real language use domains. Accordingly, I underscore the significance of prediction in validation, in contrast to explanation, and propose that the question to ask might not so much be about what a test measures as what type of methods and tools can better generate language use profiles. Finally, I suggest that interdisciplinary alliances with cognitive and computational neuroscience and artificial intelligence (AI) fields should be forged to meet the demands of language assessment in the 21st century.
Citations: 5
Test design and validity evidence of interactive speaking assessment in the era of emerging technologies
IF 4.1 · CAS Tier 1 (Literature) · LANGUAGE & LINGUISTICS · Language Testing 40(1): 54–60 · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221126606
Soo Jung Youn
As access to smartphones and emerging technologies has become ubiquitous in our daily lives and in language learning, technology-mediated social interaction has become common in teaching and assessing L2 speaking. The changing ecology of L2 spoken interaction provides language educators and testers with opportunities for renewed test design and the gathering of context-sensitive validity evidence of interactive speaking assessment. First, I review the current research on interactive speaking assessment focusing on commonly used test formats and types of validity evidence. Second, I discuss recent research that reports the use of artificial intelligence and technologies in teaching and assessing speaking in order to understand how and what evidence of interactive speaking is elicited. Based on the discussion, I argue that it is critical to identify what features of interactive speaking are elicited depending on the types of technology-mediated interaction for valid assessment decisions in relation to intended uses. I further discuss opportunities and challenges for future research on test design and eliciting validity evidence of interactive speaking using technology-mediated interaction.
Citations: 0
Forty years of Language Testing, and the changing paths of publishing
IF 4.1 · CAS Tier 1 (Literature) · LANGUAGE & LINGUISTICS · Language Testing 40(1): 3–7 · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221136802
Paula M. Winke
Citations: 0
Epilogue—Note from an outgoing editor
IF 4.1 · CAS Tier 1 (Literature) · LANGUAGE & LINGUISTICS · Language Testing 40(1): 204–205 · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221138339
L. Harding
In this brief epilogue, outgoing editor Luke Harding reflects on his time as editor and considers the future of Language Testing.
Citations: 0
Towards a new sophistication in vocabulary assessment
IF 4.1 · CAS Tier 1 (Literature) · LANGUAGE & LINGUISTICS · Language Testing 40(1): 40–46 · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221125698
J. Read
Published work on vocabulary assessment has grown substantially in the last 10 years, but it is still somewhat outside the mainstream of the field. There has been a recent call for those developing vocabulary tests to apply professional standards to their work, especially in validating their instruments for specified purposes before releasing them for widespread use. A great deal of work on vocabulary assessment can be seen in terms of the somewhat problematic distinction between breadth and depth of vocabulary knowledge. Breadth refers to assessing vocabulary size, based on a large sample of words from a frequency list. New research is raising questions about the suitability of word frequency norms derived from large corpora, the choice of the word family as the unit of analysis, the selection of appropriate test formats, and the role of guessing in test-taker performance. Depth of knowledge goes beyond the basic form-meaning link to consider other aspects of word knowledge. The concept of word association has played a dominant role in the design of such tests, but there is a need to create test formats to assess knowledge of word parts as well as a range of multi-word items apart from collocation.
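The frequency-list logic behind breadth (vocabulary size) tests can be made concrete with a toy estimate: sample words from each frequency band, test them, and scale the hit rate back up. The band size, the 10-items-per-band sampling rate, and the results below are invented for illustration.

```python
# Back-of-the-envelope vocabulary size estimate of the kind frequency-based
# breadth tests rely on. All numbers below are hypothetical.

BAND_SIZE = 1000  # assume each band covers 1,000 word families on the list

def estimate_vocab_size(hits_per_band, sample_per_band):
    """Scale per-band hit rates up to an overall size estimate."""
    return sum(BAND_SIZE * hits / sample_per_band for hits in hits_per_band)

# e.g. 10 items sampled per band; correct answers in bands 1K-5K:
hits = [10, 9, 7, 4, 2]
print(estimate_vocab_size(hits, 10))  # -> 3200.0
```

The questions the abstract raises (corpus-derived frequency norms, the word family as counting unit, guessing) all attack assumptions baked into exactly this kind of extrapolation.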
Citations: 1
Future challenges and opportunities in language testing and assessment: Basic questions and principles at the forefront
IF 4.1 · CAS Tier 1 (Literature) · LANGUAGE & LINGUISTICS · Language Testing 40(1): 15–23 · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221127896
Tineke Brunfaut
In this invited Viewpoint on the occasion of the 40th anniversary of the journal Language Testing, I argue that at the core of future challenges and opportunities for the field—both in scholarly and operational respects—remain basic questions and principles in language testing and assessment. Despite the high levels of sophistication of issues looked into, and methodological and operational solutions found, outstanding concerns still amount to: what are we testing, how are we testing, and why are we testing? Guided by these questions, I call for more thorough and adequate language use domain definitions (and a suitable broadening of research and testing methodologies to determine these), more comprehensive operationalizations of these domain definitions (especially in the context of technology in language testing), and deeper considerations of test purposes/uses and of their connections with domain definitions. To achieve this, I maintain that the field needs to continue investing in the topics of validation, ethics, and language assessment literacy, and engaging with broader fields of enquiry such as (applied) linguistics. I also encourage a more synthetic look at the existing knowledge base in order to build on this, and further diversification of voices in language testing and assessment research and practice.
Citations: 1
Administration, labor, and love
IF 4.1 · CAS Tier 1 (Literature) · LANGUAGE & LINGUISTICS · Language Testing 40(1): 31–39 · Pub Date: 2023-01-01 · DOI: 10.1177/02655322221127365
A. Ginther
Great opportunities for language testing practitioners are enabled through language program administration. Local language tests lend themselves to multiple purposes—for placement and diagnosis, as a means of tracking progress, and as a contribution to program evaluation and revision. Administrative choices, especially those involving a test, are strategic and can be used to transform a program’s identity and effectiveness over time.
Citations: 0