
Latest Publications in Language Assessment Quarterly

Low Print Literacy and Its Representation in Research and Policy
IF 2.9 · CAS Tier 2 (Literature) · Q1 Arts and Humanities · Pub Date: 2021-04-05 · DOI: 10.1080/15434303.2021.1903471
B. Deygers, Martha Bigelow, Joseph Lo Bianco, Darshini Nadarajan, M. Tani
ABSTRACT This paper constitutes an edited transcript of two online panels, conducted with four scholars whose complementary expertise regarding print literacy and migration offers a thought-provoking and innovative window on the representation of print literacy in applied linguistic research and in migration policy. The panel members are experts on language policy, literacy, proficiency and human capital research. Together, they address a range of interrelated matters: the constructs of language proficiency and literacy (with significant implications for assessment), the idea of literacy as human capital or as a human right, the urgent need for policy literacy among applied linguists, and the responsibility of applied linguistics in the literacy debate.
Citations: 6
English Language Proficiency Testing in Asia: A New Paradigm Bridging Global and Local Contexts
IF 2.9 · CAS Tier 2 (Literature) · Q1 Arts and Humanities · Pub Date: 2021-03-27 · DOI: 10.1080/15434303.2021.1903469
Davy Tran, Becky H. Huang
{"title":"English Language Proficiency Testing in Asia: A New Paradigm Bridging Global and Local Contexts","authors":"Davy Tran, Becky H. Huang","doi":"10.1080/15434303.2021.1903469","DOIUrl":"https://doi.org/10.1080/15434303.2021.1903469","url":null,"abstract":"","PeriodicalId":46873,"journal":{"name":"Language Assessment Quarterly","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2021-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/15434303.2021.1903469","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41985053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Investigating the Skills Involved in Reading Test Tasks through Expert Judgement and Verbal Protocol Analysis: Convergence and Divergence between the Two Methods
IF 2.9 · CAS Tier 2 (Literature) · Q1 Arts and Humanities · Pub Date: 2021-03-23 · DOI: 10.1080/15434303.2021.1881964
Xiaohua Liu, J. Read
ABSTRACT Expert judgement has been frequently employed with reading assessments to gauge the skills potentially measured by test tasks, for purposes such as construct validation or producing diagnostic information. Despite the critical role it plays in such endeavours, few studies have triangulated its results with other types of data such as reported test-taking processes. A lack of such triangulation may bring the validity of experts’ judgements into question and undermine the credibility of subsequent procedures that build on them. In light of this, this study compared two groups of language experts’ judgements on the content of two sets of reading test tasks with ten university students’ verbal reports on solving those tasks. It was found that convergence was achieved between the two information sources for about 53% of the test tasks on what they were mainly assessing. However, there was a bigger gap between them regarding the specific skills involved in each task. A careful examination of the discrepancies between the two sources revealed that they are attributable to a number of factors. This study highlights the need to cross-check the results of expert judgement with other data sources. Implications for future test development and research are also discussed.
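The headline convergence figure is a simple proportion: the share of test tasks on which the two information sources identify the same main skill. A minimal sketch of that measure, using invented labels (none of the data below comes from the study):

```python
# Hypothetical sketch of the convergence measure described in the abstract:
# the proportion of reading test tasks on which expert judgement and verbal
# protocols name the same main skill. All labels here are invented.

def convergence(expert_labels, protocol_labels):
    """Proportion of tasks where both sources name the same main skill."""
    assert len(expert_labels) == len(protocol_labels)
    agree = sum(e == p for e, p in zip(expert_labels, protocol_labels))
    return agree / len(expert_labels)

# One (invented) label per test task, from each information source
experts = ["inferencing", "main idea", "detail", "vocabulary",
           "detail", "inferencing", "main idea", "detail"]
protocols = ["inferencing", "detail", "detail", "vocabulary",
             "syntax", "inferencing", "main idea", "vocabulary"]

print(f"convergence = {convergence(experts, protocols):.1%}")  # 5/8 agree
```

A task-level agreement rate like this says nothing about *which* skills diverge; the study's finer-grained comparison of specific skills per task is where the larger gap appeared.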
Citations: 2
A Life for English Language Education: An Interview with Oryang Kwon
IF 2.9 · CAS Tier 2 (Literature) · Q1 Arts and Humanities · Pub Date: 2021-03-15 · DOI: 10.1080/15434303.2020.1859512
Oryang Kwon, Won-Key Lee
Introduction
Citations: 0
Is Frequency Enough?: The Frequency Model in Vocabulary Size Testing
IF 2.9 · CAS Tier 2 (Literature) · Q1 Arts and Humanities · Pub Date: 2021-03-15 · DOI: 10.1080/15434303.2020.1860058
Brett Hashimoto
ABSTRACT Modern vocabulary size tests are generally based on the notion that the more frequent a word is in a language, the more likely a learner is to know that word. However, this assumption has seldom been questioned in the literature concerning vocabulary size tests. Using the Vocabulary of American-English Size Test (VAST) based on the Corpus of Contemporary American English (COCA), 403 English language learners were tested on a 10% systematic random sample of the first 5,000 most frequent words from that corpus. The Pearson correlation between Rasch item difficulty (the probability that test-takers will know a word) and frequency was only r = 0.50 (r2 = 0.25). This moderate correlation indicates that the frequency of a word can predict which words are known with only a limited degree of accuracy, and that other factors are also affecting the order of acquisition of vocabulary. Additionally, using vocabulary levels/bands of 1,000 words as part of the structure of vocabulary size tests is shown to be questionable as well. These findings call into question the construct validity of modern vocabulary size tests. However, future confirmatory research is necessary to comprehensively determine the degree to which frequency of words and vocabulary size of learners are related.
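The reported effect size can be made concrete: r = 0.50 means frequency explains only r² = 0.25 of the variance in item difficulty. A minimal sketch of Pearson's r, with invented toy data (not the study's; in the toy data, more frequent words are easier, i.e. lower Rasch difficulty):

```python
import math

# Hypothetical illustration, not the study's data: Pearson's r between word
# frequency and Rasch item difficulty, and the variance explained (r^2).

def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy values: log word frequency vs. Rasch item difficulty (invented)
freq = [5.1, 4.8, 4.5, 4.1, 3.9, 3.5, 3.2, 2.8, 2.5, 2.1]
diff = [-1.9, -0.8, -1.2, 0.1, -0.6, 0.9, 0.3, 1.5, 0.7, 1.8]

r = pearson_r(freq, diff)
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")
```

With an |r| of 0.50, as in the study, three quarters of the variance in which words learners know is left unexplained by frequency alone, which is the abstract's central point.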
Citations: 10
Evaluating Writing Process Features in an Adult EFL Writing Assessment Context: A Keystroke Logging Study
IF 2.9 · CAS Tier 2 (Literature) · Q1 Arts and Humanities · Pub Date: 2021-03-15 · DOI: 10.1080/15434303.2020.1804913
Ikkyu Choi, P. Deane
ABSTRACT Keystroke logs provide a comprehensive record of observable writing processes. Previous studies examining the keystroke logs of young L1 English writers performing experimental writing tasks have identified writing process features predictive of the quality of responses. In contrast, large-scale studies on the dynamic and temporal nature of the L2 writing process are scarce, especially in an assessment setting. This study utilized the keystroke logs of adult English as a foreign language (EFL) learners responding to assessment tasks to examine the usefulness of the process features in this new context. We evaluated the features in terms of stability, explored factor structures for their correlations, and constructed models to predict response quality. The results showed that most of the process features were stable and that their correlations could be efficiently represented with a five-factor structure. Moreover, we observed improved response quality prediction over a baseline by up to 48%. These findings have implications for the evaluation and understanding of writing process features and for the substantive understanding of writing processes under assessment conditions.
Citations: 10
Local language testing: design, implementation, and development
IF 2.9 · CAS Tier 2 (Literature) · Q1 Arts and Humanities · Pub Date: 2021-03-08 · DOI: 10.1080/15434303.2021.1897594
Mutleb Alnafisah, S. Baghestani, Abdulrahman A. Alharthi
Throughout the language testing literature, there is a clear distinction between standardized tests, which are produced by testing companies and designed to be used across multiple institutions, and local tests, which are developed and used at a specific institution but are larger in scale than a classroom test. Local language tests are important because they can be tailored to meet the needs of the local instructional context in terms of which constructs and ability levels they assess. Nevertheless, stakeholders who are in a position to develop local language tests (e.g., language instructors and program or level coordinators) often lack formal assessment training. Local Language Testing: Design, Implementation, and Development addresses this concern by offering accessible, comprehensive guidance for non-testing experts (as well as more seasoned language testers) who are interested in developing, administering, and maintaining local language assessments at their institution. Each chapter of the book illustrates various types of constraints and challenges local language testers may face and offers solutions that can be exploited according to the available resources and expertise. In addition, one of the main objectives of this book is to draw readers’ attention to the educational benefits of local language tests. A vital characteristic of local tests which the authors emphasize throughout the book is their basis in the instructional context, a sufficient understanding of which should dictate and guide the development, administration, and maintenance of the test. For this reason, the authors bring their personal experiences with four different local tests to offer real examples and practical advice on how their local contexts shaped and affected how they approached the development of the tests. 
These four local contexts are the Oral English Proficiency Test (OEPT) at Purdue University, the Test of Oral English Proficiency for Academic Staff (TOEPAS) at the University of Copenhagen, the English Placement Test (EPT) at the University of Illinois at Urbana-Champaign, and the Assessment of College English, International (Ace-IN) at Purdue University. The first three chapters cover foundational principles for local testing and highlight the features that differentiate it from standardized testing and classroom assessment. The first chapter is an introductory chapter, and its take-away message is the centrality of understanding the local context (i.e., the educational goals and values at a particular institution or program) for successfully developing a local test. The second chapter discusses different aspects of local instructional contexts that influence language test design, such as the status of English and preferred instructional approaches. Understanding these variations enables test developers to better define and operationalize test constructs, enhancing the quality of the assessments. The third chapter introduces the authors’ conceptual model of local test development and explains how it differs from other test development models. In the authors’ model, the activities of local test development overlap and interconnect: planning and design occur concurrently before implementation and continue after test administration. Their model offers a potentially more realistic account of the test design process to help local test developers carry out this task.
Citations: 17
Text Authenticity in Listening Assessment: Can Item Writers Be Trained to Produce Authentic-sounding Texts?
IF 2.9 · CAS Tier 2 (Literature) · Q1 Arts and Humanities · Pub Date: 2021-03-08 · DOI: 10.1080/15434303.2021.1895162
Olena Rossi, Tineke Brunfaut
ABSTRACT A long-standing debate in the testing of listening concerns the authenticity of the listening input. On the one hand, listening texts produced by item writers often lack spoken language characteristics. On the other hand, real-life recordings are often too context-specific to stand alone, or not suitable for item generation. In this study, we explored the effectiveness of an existing item-writing training course to produce authentic-sounding listening texts within the constraints of test specifications. Twenty-five trainees took an online item-writing course including training on creating authentic-sounding listening texts. Prior to and after the course, they developed a listening task. The resulting listening texts were judged on authenticity by three professional item reviewers and analysed linguistically by the researchers. Additionally, we interviewed the trainees following each item writing event and analysed their online discussions from during the course. Statistical comparison of the pre- and post-course authenticity scores revealed a positive effect of the training on item-writers’ ability to produce authentic-sounding listening texts, while the linguistic analysis demonstrated that the texts produced after the training contained more instances of spoken language. The interviews and discussions revealed that item writers’ awareness of spoken language features and their text production techniques influenced their ability to develop authentic-sounding texts.
Citations: 6
Serendipitous: Lessons and Insights into Language Assessment from Catherine Elder
IF 2.9 · CAS Tier 2 (Literature) · Q1 Arts and Humanities · Pub Date: 2021-03-04 · DOI: 10.1080/15434303.2020.1863967
Interviewed by R Roz Hirch
ABSTRACT The following interview was conducted with Catherine Elder in spring of 2020, at the beginning of the pandemic. Cathie has had a varied career in language testing, including work at universities in Australia and New Zealand and at the Language Testing Resource Center in Melbourne. In this interview, Cathie shares some highlights of her somewhat serendipitous career as well as lessons she has learned along the way and insights into possible future directions for language assessment.
引用次数: 0
Testing Language, but What?: Examining the Carrier Content of IELTS Preparation Materials from a Critical Perspective
IF 2.9 Tier 2, Literature, Q1 Arts and Humanities Pub Date : 2021-02-07 DOI: 10.1080/15434303.2021.1883618
M. Noori, Seyyed-Abdolhamid Mirhosseini
ABSTRACT The implicit sociocultural functioning of the content of high-stakes English language proficiency tests is a rarely-explored concern in language assessment. This study attempts to bring critical views of language testing and critical discourse studies together to examine the content of IELTS preparation materials in search of topics that are reflected and reproduced through this content. Fourteen sample tests (including reading texts, transcripts of listening files, speaking cue-cards, and writing topics) were investigated through a qualitative content analysis process. The emerging 663 coded episodes came together in four major categories of topics that shape the overall content of these IELTS practice books: Entertainment, Money, Nature, and Education, plus a miscellaneous set of less prominent topics. The findings indicate the discursive accentuation of specific aspects of these themes as well as certain patterns of the inclusion/exclusion of settings and participants. We argue that the discursive construction of such a content landscape can shape specific sociocultural orientations, and can naturalize and reproduce mental models and values far from the universal face of an international high-stakes test.
Citations: 4