
Language Testing: Latest Publications

Practical considerations when building concordances between English tests
CAS Tier 1 (Literature) Q1 Arts and Humanities Pub Date: 2023-09-23 DOI: 10.1177/02655322231195027
Ramsey L. Cardwell, Steven W. Nydick, J.R. Lockwood, Alina A. von Davier
Applicants must often demonstrate adequate English proficiency when applying to postsecondary institutions by taking an English language proficiency test, such as the TOEFL iBT, IELTS Academic, or Duolingo English Test (DET). Concordance tables aim to provide equivalent scores across multiple assessments, helping admissions officers to make fair decisions regardless of the test that an applicant took. We present our approaches to addressing practical (i.e., data collection and analysis) challenges in the context of building concordance tables between overall scores from the DET and those from the TOEFL iBT and IELTS Academic tests. We summarize a novel method for combining self-reported and official scores to meet recommended minimum sample sizes for concordance studies. We also evaluate sensitivity of estimated concordances to choices about how to (a) weight the observed data to the target population; (b) define outliers; (c) select appropriate pairs of test scores for repeat test takers; and (d) compute equating functions between pairs of scores. We find that estimated concordance functions are largely robust to different combinations of these choices in the regions of the proficiency distribution most relevant to admissions decisions. We discuss implications of our results for both test users and language testers.
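The equating step in (d) can be pictured as an equipercentile link: a score on one test is matched to the score on the other test that sits at the same cumulative proportion of its distribution. The following is a minimal sketch with invented score scales and no presmoothing or population weighting; it illustrates the general technique, not the authors' operational procedure.

```python
from bisect import bisect_right

def ecdf(scores):
    """Empirical cumulative distribution: proportion of scores <= x."""
    xs, n = sorted(scores), len(scores)
    return lambda x: bisect_right(xs, x) / n

def equipercentile_concordance(from_scores, to_scores):
    """Map each observed score on test A to the test-B score whose
    cumulative proportion is closest (a discrete equipercentile link)."""
    F, G = ecdf(from_scores), ecdf(to_scores)
    to_points = sorted(set(to_scores))
    return {x: min(to_points, key=lambda y: abs(G(y) - F(x)))
            for x in sorted(set(from_scores))}
```

On two samples with identically shaped distributions, say `[10, 20, 30, 40, 50]` and `[60, 70, 80, 90, 100]`, the link maps 10 to 60, 30 to 80, and 50 to 100; operational concordance studies additionally smooth the score distributions and weight the sample toward the target population before linking.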
Citations: 0
Book review: C. A. Chapelle and E. Voss (Eds.), Validity Argument in Language Testing: Case Studies of Validation Research
IF 4.1 CAS Tier 1 (Literature) Q1 Arts and Humanities Pub Date: 2023-08-17 DOI: 10.1177/02655322231193705
Yasuyo Sawaki
Citations: 0
Language testers and their place in the policy web
IF 4.1 CAS Tier 1 (Literature) Q1 Arts and Humanities Pub Date: 2023-08-17 DOI: 10.1177/02655322231191133
Laura Schildt, B. Deygers, A. Weideman
In the context of policy-driven language testing for citizenship, a growing body of research examines the political justifications and ethical implications of language requirements and test use. However, virtually no studies have looked at the role that language testers play in the evolution of language requirements. Critical gaps remain in our understanding of language testers’ first-hand experiences interacting with policymakers and how they perceive the use of tests in public policy. We examined these questions using an exploratory design and semi-structured interviews with 28 test executives representing 25 exam boards in 20 European countries. The interviews were transcribed and double coded in NVivo (weighted kappa = .83) using a priori and inductive coding. We used a horizontal analysis to evaluate responses by participant and a vertical analysis to identify between-case themes. Findings indicate that language testers may benefit from policy literacy to form part of policy webs wherein they can influence instrumental decisions concerning language in migration policy.
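The double-coding check reported above (weighted kappa = .83) can in principle be computed from the two coders' category assignments with Cohen's weighted kappa. The sketch below is a generic illustration assuming ordered categories and quadratic disagreement weights; the exact weighting scheme applied in the study is not specified in the abstract.

```python
def weighted_kappa(r1, r2, categories):
    """Cohen's weighted kappa for two raters, quadratic disagreement weights."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # observed joint proportions over category pairs
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n
    # marginal proportions for each rater
    p1 = [sum(row) for row in obs]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # quadratic weights: distant disagreements are penalized more
    w = [[((i - j) / (k - 1)) ** 2 for j in range(k)] for i in range(k)]
    disagree_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    disagree_exp = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1 - disagree_obs / disagree_exp
```

With perfect agreement the statistic is 1.0; near-miss disagreements on an ordered scale are penalized less than distant ones, which is the rationale for weighting.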
Citations: 0
Assessing the content quality of essays in content and language integrated learning: Exploring the construct from subject specialists’ perspectives
IF 4.1 CAS Tier 1 (Literature) Q1 Arts and Humanities Pub Date: 2023-08-08 DOI: 10.1177/02655322231190058
Takanori Sato
Assessing the content of learners’ compositions is a common practice in second language (L2) writing assessment. However, the construct definition of content in L2 writing assessment potentially underrepresents the target competence in content and language integrated learning (CLIL), which aims to foster not only L2 proficiency but also critical thinking skills and subject knowledge. This study aims to conceptualize the construct of content in CLIL by exploring subject specialists’ perspectives on essays’ content quality in a CLIL context. Eleven researchers of English as a lingua franca (ELF) rated the content quality of research-based argumentative essays on ELF submitted in a CLIL course and produced think-aloud protocols. This study explored some essay features that have not been considered relevant in language assessment but are essential in the CLIL context, including the accuracy of the content, presence and quality of research, and presence of elements required in academic essays. Furthermore, the findings of this study confirmed that the components of content often addressed in language assessment (e.g., elaboration and logicality) are pertinent to writing assessment in CLIL. The manner in which subject specialists construe the content quality of essays on their specialized discipline can deepen the current understanding of content in CLIL.
Citations: 0
Test review: High-stakes English language proficiency tests—Enquiry, resit, and retake policies
IF 4.1 CAS Tier 1 (Literature) Q1 Arts and Humanities Pub Date: 2023-07-25 DOI: 10.1177/02655322231186706
William S. Pearson
Many candidates undertaking high-stakes English language proficiency tests for academic enrolment do not achieve the results they need for reasons including linguistic unreadiness, test unpreparedness, illness, an unfavourable configuration of tasks, or administrative and marking errors. Owing to the importance of meeting goals or out of a belief that original test performance was satisfactory, some individuals query their results, while others go on to retake the test, perhaps on multiple occasions. This article critically reviews the policies of eight well-known, on-demand gatekeeping English language tests, describing the systems adopted by language assessment organisations to regulate results enquiries, candidates resitting (components of) a test where performance fell short of requirements, and repeat test-taking. It was found that all providers institute clear mechanisms through which candidates can query their results, with notable variations exhibited in procedures, costs, restrictions, outcomes, and how policies are communicated to test-takers. Test resit options are scarce, while organisations enact few restrictions on test retakes in the form of mandatory waiting times and cautionary advice. The implications for language assessment organisations are discussed.
Citations: 0
Book review: K. Sadeghi (Ed.), Technology-Assisted Language Assessment in Diverse Contexts: Lessons from the Transition to Online Testing During Covid-19
IF 4.1 CAS Tier 1 (Literature) Q1 Arts and Humanities Pub Date: 2023-07-20 DOI: 10.1177/02655322231186707
Tomohito Hiromori, H. Mohebbi
Citations: 0
Assessing speaking through multimodal oral presentations: The case of construct underrepresentation in EAP contexts
IF 4.1 CAS Tier 1 (Literature) Q1 Arts and Humanities Pub Date: 2023-07-07 DOI: 10.1177/02655322231183077
Louise Palmour
This article explores the nature of the construct underlying classroom-based English for academic purpose (EAP) oral presentation assessments, which are used, in part, to determine admission to programmes of study at UK universities. Through analysis of qualitative data (from questionnaires, interviews, rating discussions, and fieldnotes), the article highlights how, in EAP settings, there is a tendency for the rating criteria and EAP teacher assessors to sometimes focus too narrowly on particular spoken linguistic aspects of oral presentations. This is in spite of student assessees drawing on, and teacher assessors valuing, the multimodal communicative affordances available in oral presentation performances. To better avoid such construct underrepresentation, oral presentation tasks should be acknowledged and represented in rating scales, teacher assessor decision-making, and training in EAP contexts.
Citations: 0
Development of the American Sign Language Fingerspelling and Numbers Comprehension Test (ASL FaN-CT)
IF 4.1 CAS Tier 1 (Literature) Q1 Arts and Humanities Pub Date: 2023-07-03 DOI: 10.1177/02655322231179494
C. Occhino, Ryan Lidster, Leah Geer, Jason D. Listman, P. Hauser
We describe the development and initial validation of the “ASL Fingerspelling and Number Comprehension Test” (ASL FaN-CT), a test of recognition proficiency for fingerspelled words in American Sign Language (ASL). Despite the relative frequency of fingerspelling in ASL discourse, learners commonly struggle to produce and perceive fingerspelling more than they do other facets of ASL. However, assessments of fingerspelling knowledge are highly underrepresented in the testing literature for signed languages. After first describing the construct, we describe test development, piloting, revisions, and evaluate the strength of the test’s validity argument vis-à-vis its intended interpretation and use as a screening instrument for current and future employees. The results of a pilot on 79 ASL learners provide strong evidence that the revised test is performing as intended and can be used to make accurate decisions about ASL learners’ proficiency in fingerspelling recognition. We conclude by describing the item properties observed in our current test, and our plans for continued validation and analysis with respect to a battery of tests of ASL proficiency currently in development.
Citations: 0
Fairness of using different English accents: The effect of shared L1s in listening tasks of the Duolingo English test
IF 4.1 CAS Tier 1 (Literature) Q1 Arts and Humanities Pub Date: 2023-07-03 DOI: 10.1177/02655322231179134
Okim Kang, Xun Yan, M. Kostromitina, Ron I. Thomson, T. Isaacs
This study aimed to answer an ongoing validity question related to the use of nonstandard English accents in international tests of English proficiency and associated issues of test fairness. More specifically, we examined (1) the extent to which different or shared English accents had an impact on listeners’ performances on the Duolingo listening tests and (2) the extent to which different English accents affected listeners’ performances on two different task types. Speakers from four interlanguage English accent varieties (Chinese, Spanish, Indian English [Hindi], and Korean) produced speech samples for “yes/no” vocabulary and dictation Duolingo listening tasks. Listeners who spoke with these same four English accents were then recruited to take the Duolingo listening test items. Results indicated that there is a shared first language (L1) benefit effect overall, with comparable test scores between shared-L1 and inner-circle L1 accents, and no significant differences in listeners’ listening performance scores across highly intelligible accent varieties. No task type effect was found. The findings provide guidance to better understand fairness, equality, and practicality of designing and administering high-stakes English tests targeting a diversity of accents.
引用次数: 1
English learners who are blind or visually impaired: A participatory design approach to enhancing fairness and validity for language testing accommodations 盲人或视障英语学习者:提高语言测试公平性和有效性的参与式设计方法
IF 4.1, Tier 1 (Literature), Q1 Arts and Humanities Pub Date: 2023-06-27 DOI: 10.1177/02655322231159143
Danielle Guzman-Orth, Jonathan Steinberg, Traci Albee
Standardizing accessible test design and development to meet students' individual access needs is a complex task. The following study provides one approach to accessible test design and development using participatory design methods with school community members. Participatory research provides opportunities to empower collaborators by co-creating knowledge that is useful for assessment development. In this study, teachers of students who are visually impaired, students who are blind or are visually impaired, English language teachers, and test administrators provided feedback at critical stages of the development process to explore the construct validity of English language proficiency (ELP) assessments. Students who are blind or visually impaired need to be able to show what they know and can do without impact from construct-irrelevant variance like language acquisition or disability characteristics. Building on our iterative accessible test design, development, and delivery practices, and as part of a large project on English-learner proficiency test accessibility and usability, we collected rich observation and interview data from 17 students who were blind or visually impaired and were enrolled in grades kindergarten through Grade 12. We examined the ratings and item metadata, including assistive technology preferences and interactions, while we used grounded theory approaches to examine qualitative thematic findings. Implications for research and practice are discussed.
{"title":"English learners who are blind or visually impaired: A participatory design approach to enhancing fairness and validity for language testing accommodations","authors":"Danielle Guzman-Orth, Jonathan Steinberg, Traci Albee","doi":"10.1177/02655322231159143","DOIUrl":"https://doi.org/10.1177/02655322231159143","url":null,"abstract":"Standardizing accessible test design and development to meet students’ individual access needs is a complex task. The following study provides one approach to accessible test design and development using participatory design methods with school community members. Participatory research provides opportunities to empower collaborators by co-creating knowledge that is useful for assessment development. In this study, teachers of students who are visually impaired, students who are blind or are visually impaired, English language teachers, and test administrators provided feedback at critical stages of the development process to explore the construct validity of English language proficiency (ELP) assessments. Students who are blind or visually impared need to be able to show what they know and can do without impact from construct-irrelevant variance like language acquisition or disability characteristics. Building on our iterative accessible test design, development, and delivery practices, and as part of a large project on English-learner proficiency test accessibility and usability, we collected rich observation and interview data from 17 students who were blind or visually impaired and were enrolled in grades kindergarten through Grade 12. We examined the ratings and item metadata, including assistive technology preferences and interactions, while we used grounded theory approaches to examine qualitative thematic findings. 
Implications for research and practice are discussed.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43968301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1