Rethinking student placement to enhance efficiency and student agency
Pub Date: 2023-06-22 | DOI: 10.1177/02655322231179128
Beverly A. Baker, Angel Arias, Louis-David Bibeau, Yiwei (Coral) Qin, Margret Norenberg, Jennifer St-John
Placement tests serve a particular need in a local context: determining the best starting place for a student entering a specific programme of language study. This brief report focuses on the development of an innovative placement test with self-directed elements, built to meet local needs at a Canadian university where students study English or French as a second language. Our goals are to produce a more efficient assessment instrument and to give students more agency in the process. We hope that sharing these details will encourage others to consider the potential of incorporating self-directed elements into low-stakes placement decision-making.
{"title":"Rethinking student placement to enhance efficiency and student agency","authors":"Beverly A. Baker, Angel Arias, Louis-David Bibeau, Yiwei (Coral) Qin, Margret Norenberg, Jennifer St-John","doi":"10.1177/02655322231179128","DOIUrl":"https://doi.org/10.1177/02655322231179128","url":null,"abstract":"Placement tests are used to support a particular need in a local context—to determine the best starting place for a student entering a specific programme of language study. This brief report will focus on the development of an innovative placement test with self-directed elements for our local needs at a university in Canada for students studying English or French as a second language. Our goals are to produce a more efficient assessment instrument while allowing students more agency through the process. We hope that sharing these details will encourage others to consider the potential of incorporating self-directed elements in low-stakes placement decision-making.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":" ","pages":""},"PeriodicalIF":4.1,"publicationDate":"2023-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49249005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward a systematic accessibility review process for English language proficiency tests for young learners
Pub Date: 2023-05-27 | DOI: 10.1177/02655322231168386 | Language Testing 40(1): 856-876
Laurene L. Christensen, Vitaliy V. Shyyan, Fabiana Macmillan
To make assessments as widely accessible as possible, including to young learners from diverse backgrounds with a wide range of individual needs and characteristics, some developers of standardized tests offer accessibility tools (e.g., magnifying/zoom) and accommodations (e.g., extended response time) to test takers. These solutions are limited by the fact that they are applied to tests retroactively. One way to meet the accessibility needs of diverse students is to address accessibility proactively, during the development of test content for all learners. In this article, we report on the development and piloting of a systematic accessibility review process intended for all test content at an organization producing large-scale standardized English proficiency tests for elementary and secondary students in the United States. We describe the theoretical and research foundations related to fairness, accessibility, and universal design that guided the development of the accessibility review process, followed by the process used to develop the tool. The application of the accessibility checklist to a kindergarten English language proficiency assessment is described in detail. Finally, we share considerations for how the accessibility checklist could be used in other assessment development contexts.
{"title":"Toward a systematic accessibility review process for English language proficiency tests for young learners","authors":"Laurene L. Christensen, Vitaliy V. Shyyan, Fabiana Macmillan","doi":"10.1177/02655322231168386","DOIUrl":"https://doi.org/10.1177/02655322231168386","url":null,"abstract":"In order to make assessments as widely accessible as possible, including to young learners from diverse backgrounds with a wide range of individual needs and characteristics, some developers of standardized tests have resorted to offering accessibility tools (e.g., magnifying/zoom) and accommodations (e.g., extended response time) to test takers. These solutions have some limitations stemming from the fact that they are retroactively applied to tests. One possible avenue to meet the accessibility needs of diverse students is to proactively address accessibility in the development of test content for all learners. In this article, we report on the development and piloting of a systematic accessibility review process intended for all test content at an organization producing large-scale standardized English proficiency tests for elementary and secondary education students in the United States. We describe the theoretical and research foundations related to fairness, accessibility, and universal design that guided our development of the accessibility review process followed by the process used to develop the tool. The application of the accessibility checklist to a kindergarten English language proficiency assessment is described in detail. Finally, we share some considerations for how the accessibility checklist could be used in other assessment development contexts.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"856 - 876"},"PeriodicalIF":4.1,"publicationDate":"2023-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43828626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diagnosing Chinese EFL learners’ writing ability using polytomous cognitive diagnostic models
Pub Date: 2023-05-26 | DOI: 10.1177/02655322231162840
Xiaoting Shi, Xiaomei Ma, Wenbo Du, Xuliang Gao
Cognitive diagnostic assessment (CDA) aims to identify learners’ strengths and weaknesses in latent cognitive attributes in order to provide personalized remedial instruction. Previous CDA studies on English as a Foreign Language (EFL) and English as a Second Language (ESL) writing have applied dichotomous cognitive diagnostic models (CDMs) to data from checklists based on simple yes/no judgments. Compared with descriptors offering multiple levels, yes/no descriptors are arguably too absolute and risk misjudging learners’ writing ability. However, few studies have used polytomous CDMs to analyze graded response data from rating scales for diagnosing writing ability. This study applied polytomous CDMs to diagnose the writing performance of 1166 EFL learners, scored with a three-level rating scale. The sequential G-DINA (sG-DINA) model was selected after comparing model-data fit statistics across multiple polytomous CDMs. Classification accuracy indices and item discrimination indices further demonstrated that sG-DINA performed well in identifying learners’ strengths and weaknesses. The diagnostic information generated at the group and individual levels was synthesized into a personalized diagnostic report, although its usefulness still requires further investigation. The findings provide evidence for the feasibility of applying polytomous CDMs in EFL writing assessment.
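The sequential-model logic behind an analysis like this can be illustrated with a minimal Python sketch: a graded item is treated as a chain of steps, each step with its own required attributes and success probability, and the probability of each score category falls out of passing and failing those steps. This is a simplification using DINA-style step functions rather than the full G-DINA parameterization the study fits, and the Q-matrix, guess, and slip values below are hypothetical, not the authors’ estimates.

```python
import itertools
import numpy as np

def step_success(alpha, q_row, guess, slip):
    """DINA-style step success probability: (1 - slip) if the learner
    masters every attribute the step requires, otherwise guess."""
    mastered = all(a >= q for a, q in zip(alpha, q_row))
    return 1 - slip if mastered else guess

def category_probs(alpha, q_steps, guess, slip):
    """P(X = j | alpha) for a sequential (continuation-ratio) item with
    categories 0..K: scoring j means passing steps 1..j and failing step j+1."""
    K = len(q_steps)
    s = [step_success(alpha, q_steps[j], guess[j], slip[j]) for j in range(K)]
    probs = []
    for j in range(K + 1):
        pass_prob = np.prod(s[:j])                  # pass steps 1..j
        fail_next = (1 - s[j]) if j < K else 1.0    # fail step j+1, if any
        probs.append(pass_prob * fail_next)
    return np.array(probs)  # sums to 1 across categories

# Hypothetical three-level item (scores 0, 1, 2) measuring two attributes:
# step 1 requires attribute 1 only; step 2 requires both attributes.
q_steps = [(1, 0), (1, 1)]
guess, slip = [0.2, 0.1], [0.1, 0.2]
for alpha in itertools.product([0, 1], repeat=2):
    print(alpha, category_probs(alpha, q_steps, guess, slip).round(3))
```

Running the sketch shows the diagnostic intuition: a learner mastering both attributes concentrates probability on the top category, while a non-master is most likely to score 0, which is the kind of attribute-level contrast a personalized diagnostic report builds on.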
{"title":"Diagnosing Chinese EFL learners’ writing ability using polytomous cognitive diagnostic models","authors":"Xiaoting Shi, Xiaomei Ma, Wenbo Du, Xuliang Gao","doi":"10.1177/02655322231162840","DOIUrl":"https://doi.org/10.1177/02655322231162840","url":null,"abstract":"Cognitive diagnostic assessment (CDA) intends to identify learners’ strengths and weaknesses in latent cognitive attributes to provide personalized remedial instructions. Previous CDA studies on English as a Foreign Language (EFL)/English as a Second Language (ESL) writing have adopted dichotomous cognitive diagnostic models (CDMs) to analyze data from checklists using simple yes/no judgments. Compared to descriptors with multiple levels, descriptors with only yes/no judgments were considered too absolute, potentially resulting in misjudgment of learners’ writing ability. However, few studies have used polytomous CDMs to analyze graded response data from rating scales to diagnose writing ability. This study applied polytomous CDMs to diagnose 1166 EFL learners’ writing performance scored with a three-level rating scale. The sG-DINA model was selected after comparing model-data fit statistics of multiple polytomous CDMs. The results of classification accuracy indices and item discrimination indices further demonstrated that sG-DINA had good performance on identifying learners’ strengths and weaknesses. The generated diagnostic information at group and individual levels was further synthesized into a personalized diagnostic report, although its usefulness still requires further investigation. The findings provided evidence for the feasibility of applying polytomous CDM in EFL writing assessment.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":" ","pages":""},"PeriodicalIF":4.1,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42780188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advancing equity in language assessment for learners with disabilities
Pub Date: 2023-05-26 | DOI: 10.1177/02655322231169442 | Language Testing 40(1): 984-999
Robert A. Randez, C. Cornell
Promoting diversity, equity, and inclusion (DEI) has become a unifying cause within applied linguistics. Whether highlighting the experiences of linguistically diverse learners across the social class spectrum or advocating for the inclusion of marginalized populations in research, researchers across the subfields of applied linguistics have firmly taken up the DEI cause. One population, however, remains sparsely addressed and requires substantial attention: the disabled community, which encompasses a range of demographics with a variety of needs. In this article, we provide a historical commentary on the establishment of equity for the disabled community within the United States, applied linguistics, and the wider language testing field. We then offer a framework for advancing equity through a reflective process, along with two examples. The first focuses on the terminology used to refer to this population and the ongoing work of respectful representation in published scholarship. The second adds nuance by discussing testing accommodations for individuals with autism spectrum disorder, contextualizing one subgroup of the disabled community within language testing. We hope this information will encourage the language testing field to keep the disabled community in view as it works to advance equity through equitable assessment practices.
{"title":"Advancing equity in language assessment for learners with disabilities","authors":"Robert A. Randez, C. Cornell","doi":"10.1177/02655322231169442","DOIUrl":"https://doi.org/10.1177/02655322231169442","url":null,"abstract":"Promoting diversity, equity, and inclusion (DEI) has become a unifying cause within applied linguistics. Whether highlighting the experiences of linguistically diverse learners across the social class spectrum or advocating for the inclusion of marginalized populations in research, researchers within the subfields of applied linguistics have firmly taken up the DEI cause. However, one population is sparsely addressed, and this requires substantial attention. The disabled community includes a range of demographics with a variety of needs. In this article, we provide a historical commentary on the establishment of equity for the disabled community within the United States, applied linguistics, and the wider language testing field. We then offer a framework for advancing equity through a reflective process, along with two examples. The first focuses on the terminology used to reference this population and the ongoing process of respectful representation within published work. The second gives nuance by discussing testing accommodations for individuals with autism spectrum disorder to contextualize a subgroup of the disabled community within language testing. We hope that the information provided will encourage the language testing field to continue to consider the disabled community in the field’s efforts to advance equity through equitable assessment practices.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"984 - 999"},"PeriodicalIF":4.1,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47455283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of an accommodations policy for candidates with diverse needs in a large-scale testing system
Pub Date: 2023-05-16 | DOI: 10.1177/02655322231166587 | Language Testing 40(1): 904-932
J. Motteram, Richard Spiby, G. Bellhouse, Katarzyna Sroka
This article describes the implementation of a special accommodations policy for a suite of localised English language and numeracy tests, the Workplace Literacy and Numeracy (WPLN) Assessments. The WPLN assessments are computer-delivered and form part of the WPLN training and assessment programme, which exists to provide access to workforce skills training across the Singaporean population. Operational and interview data were analysed to investigate three main areas: the extent to which WPLN accommodations are appropriate and effective for the test-taker population; the impact of the accommodations on test-takers’ future opportunities; and the factors that stakeholders perceive as playing an important role in test accommodations. The findings indicate that while the accommodations are generally considered appropriate and effective for the diverse WPLN test-taker population, and facilitate improved educational and workplace opportunities, some areas of the process are problematic or warrant further consideration. Specifically, recommendations are made for improving how special accommodations policies are developed, disseminated, and implemented in large-scale test systems so as to safeguard access and inclusion.
{"title":"Implementation of an accommodations policy for candidates with diverse needs in a large-scale testing system","authors":"J. Motteram, Richard Spiby, G. Bellhouse, Katarzyna Sroka","doi":"10.1177/02655322231166587","DOIUrl":"https://doi.org/10.1177/02655322231166587","url":null,"abstract":"This article describes the implementation of a special accommodations policy for a suite of localised English language and numeracy tests, the Workplace Literacy and Numeracy (WPLN) Assessments. The WPLN are computer-delivered assessments, part of the WPLN training and assessment programme, which exists to provide access to workforce skills training across the Singaporean population. Operational and interview data were analysed to investigate three main areas: the extent to which WPLN accommodations are considered appropriate and effective for the test-taker population; the impact of the accommodations on test-takers’ future opportunities; and the main factors perceived by stakeholders as playing an important role in test accommodations. The findings indicate that while the accommodations provided are generally considered to be appropriate and effective for the diverse WPLN test-taker population and facilitate improved future educational and workplace opportunities, some areas of the process are problematic or worthy of further consideration. Specifically, recommendations are made for future improvement in special accommodations policy development, dissemination, and implementation in large-scale test systems to safeguard access and inclusion.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"904 - 932"},"PeriodicalIF":4.1,"publicationDate":"2023-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44420364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The relationship between written discourse features and integrated listening-to-write scores for adolescent English language learners
Pub Date: 2023-05-13 | DOI: 10.1177/02655322231167629
Ray J. T. Liao, Renka Ohta, K. Lee
As integrated writing tasks have risen in popularity in large-scale and classroom-based writing assessments, research has increasingly concentrated on providing validity evidence. Given that most of these studies focus on adult rather than younger second language learners, this study examined the relationship between written discourse features, vocabulary support, and integrated listening-to-write scores for adolescent English learners. Participants were 198 Taiwanese high school students who completed two integrated listening-to-write tasks. Before each writing task, a list of key vocabulary was provided to aid comprehension of the listening passage. The written products were coded and analyzed for measures of discourse features and vocabulary use, including complexity, accuracy, fluency, organization, vocabulary use ratio, and vocabulary use accuracy. We then used descriptive statistics and hierarchical linear regression analyses to investigate the extent to which these measures predicted integrated listening-to-write test scores. The results showed that fluency, organization, grammatical accuracy, and vocabulary use accuracy were significant predictors of the writing test scores. Moreover, the results revealed that providing vocabulary support does not necessarily jeopardize the validity of integrated listening-to-write tasks. Implications for research and test development are also discussed.
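For readers unfamiliar with hierarchical (blockwise) regression, the following minimal Python sketch shows the general procedure on simulated data: predictor blocks are entered one at a time and the change in R-squared is tracked. The block ordering, feature names, and data here are hypothetical stand-ins, not the authors’ dataset or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 198  # matches the study's sample size; the scores below are simulated

# Hypothetical discourse-feature measures and a composite writing score.
df = pd.DataFrame({
    "fluency": rng.normal(size=n),
    "organization": rng.normal(size=n),
    "grammatical_accuracy": rng.normal(size=n),
    "vocab_use_accuracy": rng.normal(size=n),
})
df["score"] = (0.4 * df["fluency"] + 0.3 * df["organization"]
               + 0.2 * df["grammatical_accuracy"]
               + 0.2 * df["vocab_use_accuracy"]
               + rng.normal(scale=0.5, size=n))

# Enter predictor blocks step by step and report the change in R-squared.
blocks = [["fluency"], ["organization"],
          ["grammatical_accuracy", "vocab_use_accuracy"]]
entered, prev_r2 = [], 0.0
for block in blocks:
    entered += block
    X = sm.add_constant(df[entered])
    fit = sm.OLS(df["score"], X).fit()
    print(f"+ {', '.join(block)}: R2 = {fit.rsquared:.3f} "
          f"(change = {fit.rsquared - prev_r2:.3f})")
    prev_r2 = fit.rsquared
```

The R-squared increment at each step indicates how much variance a block explains beyond the blocks already entered, which is how a study of this kind isolates the contribution of, say, vocabulary use accuracy over fluency and organization.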
{"title":"The relationship between written discourse features and integrated listening-to-write scores for adolescent English language learners","authors":"Ray J. T. Liao, Renka Ohta, K. Lee","doi":"10.1177/02655322231167629","DOIUrl":"https://doi.org/10.1177/02655322231167629","url":null,"abstract":"As integrated writing tasks in large-scale and classroom-based writing assessments have risen in popularity, research studies have increasingly concentrated on providing validity evidence. Given the fact that most of these studies focus on adult second language learners rather than younger ones, this study examined the relationship between written discourse features, vocabulary support, and integrated listening-to-write scores for adolescent English learners. The participants of this study consisted of 198 Taiwanese high school students who completed two integrated listening-to-write tasks. Prior to each writing task, a list of key vocabulary was provided to aid the students’ comprehension of the listening passage. Their written products were coded and analyzed for measures of discourse features and vocabulary use, including complexity, accuracy, fluency, organization, vocabulary use ratio, and vocabulary use accuracy. We then adopted descriptive statistics and hierarchical linear regression analyses to investigate the extent to which such measures were predictive of integrated listening-to-write test scores. The results showed that fluency, organization, grammatical accuracy, and vocabulary use accuracy were significant predictors of the writing test scores. Moreover, the results revealed that providing vocabulary support may not necessarily jeopardize the validity of integrated listening-to-write tasks. The implications for research and test development were also discussed.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":" ","pages":""},"PeriodicalIF":4.1,"publicationDate":"2023-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45250297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Book review: Learning-Oriented Language Assessment: Putting Theory into Practice
Pub Date: 2023-05-09 | DOI: 10.1177/02655322231164565 | Language Testing 40(1): 1036-1039
Janna Fox
This edited volume provides a substantive review of learning-oriented assessment (LOA) research as it has been conceptualized in the literature and is currently playing out across diverse language teaching contexts. Although many definitions of LOA are discussed, in general they accord with Fulcher’s (Chapter 3) observation that LOA “is defined by the tasks that learners are asked to do, learner involvement in the process of doing and assessing the tasks, and the feedback provided to the learner on task performance” (p. 34). Throughout, the contributors acknowledge influential antecedents of LOA. They draw attention to the Assessment Reform Group (ARG), which reported on the negative consequences of large-scale assessment, argued for increased trust in teachers’ assessment, and reported that such formative assessment—undertaken on an ongoing basis by teachers and students for learning purposes—significantly improved overall school performance (cf. Black & Wiliam, 1998). Also from the general education literature, the contributors highlight Carless’ (2007) LOA framework and concomitant assessment principles and Pellegrino et al.’s (2001) vision of alignment as a “comprehensive, coherent, and continuous” (p. 9) system, seamlessly linking [macro-level] policy, curriculum, and large-scale external tests with [micro-level] classroom-based assessment through a collectively shared model of student learning. However, as several contributors note, a shared model of learning (which is required to maintain such alignment) has proved elusive. Within language assessment research, two other LOA frameworks are prominently featured: Jones and Saville’s (2016) systemic LOA Cycle, which extended Pellegrino et al.’s vision of alignment (see Saville, Chapter 2), and Turner and Purpura’s (2016) Working framework for LOA, which identified seven “interrelated dimensions” that, taken together, account for LOA’s “complex” and “multifaceted” nature (p. 262). Research reported in the volume is recurrently informed by these frameworks.
{"title":"Book review: Learning-Oriented Language Assessment: Putting Theory into Practice","authors":"Janna Fox","doi":"10.1177/02655322231164565","DOIUrl":"https://doi.org/10.1177/02655322231164565","url":null,"abstract":"This edited volume provides a substantive review of learning-oriented assessment (LOA) research as it has been conceptualized in the literature and is currently playing out across diverse language teaching contexts. Although many definitions of LOA are discussed, in general they accord with Fulcher’s (Chapter 3) observation that LOA “is defined by the tasks that learners are asked to do, learner involvement in the process of doing and assessing the tasks, and the feedback provided to the learner on task performance” (p. 34). Throughout, the contributors acknowledge influential antecedents of LOA. They draw attention to the Assessment Reform Group (ARG), which reported on the negative consequences of large-scale assessment, argued for increased trust in teachers’ assessment, and reported that such formative assessment—undertaken on an ongoing basis by teachers and students for learning purposes—significantly improved overall school performance (cf. Black & Wiliam, 1998). Also from the general education literature, the contributors highlight Carless’ (2007) LOA framework and concomitant assessment principles and Pellegrino et al.’s (2001) vision of alignment as a “comprehensive, coherent, and continuous” (p. 9) system, seamlessly linking [macro-level] policy, curriculum, and large-scale external tests with [micro-level] classroom-based assessment through a collectively shared model of student learning. However, as several contributors note, a shared model of learning (which is required to maintain such alignment) has proved elusive. Within language assessment research, two other LOA frameworks are prominently featured: Jones and Saville’s (2016) systemic LOA Cycle, which extended Pellegrino et al.’s vision of alignment (see Saville, Chapter 2), and Turner and Purpura’s (2016) Working framework for LOA, which identified seven “interrelated dimensions” that, taken together, account for LOA’s “complex” and “multifaceted” nature (p. 262). Research reported in the volume is recurrently informed by these frameworks. 1164565 LTJ0010.1177/02655322231164565Language TestingBook Reviews book-review2023","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"1036 - 1039"},"PeriodicalIF":4.1,"publicationDate":"2023-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45926819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Book review: Routledge Handbook of Second Language Acquisition and Language Testing
Pub Date: 2023-05-05 | DOI: 10.1177/02655322231172105 | Language Testing 40(1): 1040-1043
Nivja H. de Jong
{"title":"Book review: Routledge Handbook of Second Language Acquisition and Language Testing","authors":"Nivja H. de Jong","doi":"10.1177/02655322231172105","DOIUrl":"https://doi.org/10.1177/02655322231172105","url":null,"abstract":"","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"1040 - 1043"},"PeriodicalIF":4.1,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42500974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
English foreign language reading and spelling diagnostic assessments informing teaching and learning of young learners
Pub Date: 2023-04-29 | DOI: 10.1177/02655322231162838
Janina Kahn-Horwitz, Zahava Goldstein
To inform English foreign language (EFL) diagnostic assessment of literacy, this study examined the extent to which 175 first-language Hebrew-speaking EFL learners in the fifth through tenth grades differed in single-letter grapheme recognition, sub-word and word reading, and rapid automatized naming (RAN) of letters and numbers. This cross-sectional, quasi-experimental quantitative study also examined correlations between these literacy components and oral reading speed, spelling, vocabulary, syntax, and morphological awareness. There were no differences between the grades in single-letter grapheme recognition, and participants demonstrated incomplete automaticity on this task. Sub-word recognition improved across grades, although the results highlighted a lack of mastery. Sub-word recognition correlated with word reading and spelling throughout. Speeded RAN measures and oral reading speed correlated with sub-word recognition, word recognition, and spelling in the older grades, illustrating the presence of both accuracy and speed components. Correlations across grades between literacy components and vocabulary, syntax, and morphological awareness support theories of how knowledge of multiple layers of words contributes to literacy acquisition. These EFL diagnostic assessment results can inform the teaching and learning of reading and spelling.
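The core of such a componential analysis is a matrix of pairwise correlations between literacy measures. A small Python sketch of that computation follows, on simulated stand-in scores; the variable names and effect structure are hypothetical, chosen only to mirror the kinds of components the study measured.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 175  # the study's sample size; the scores below are simulated stand-ins

# Hypothetical per-learner component scores sharing a common literacy factor.
latent = rng.normal(size=n)
df = pd.DataFrame({
    "subword_recognition": latent + rng.normal(scale=0.7, size=n),
    "word_reading": latent + rng.normal(scale=0.7, size=n),
    "spelling": latent + rng.normal(scale=0.8, size=n),
    # RAN is a timed measure, so lower values mean faster naming.
    "ran_letters": -0.5 * latent + rng.normal(scale=0.9, size=n),
    "morphological_awareness": latent + rng.normal(scale=1.0, size=n),
})

# Pairwise Pearson correlations between the literacy components.
print(df.corr().round(2))
```

In a real diagnostic study these coefficients would be computed per grade band, which is how patterns such as RAN correlating with word recognition only in the older grades become visible.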
{"title":"English foreign language reading and spelling diagnostic assessments informing teaching and learning of young learners","authors":"Janina Kahn-Horwitz, Zahava Goldstein","doi":"10.1177/02655322231162838","DOIUrl":"https://doi.org/10.1177/02655322231162838","url":null,"abstract":"In order to inform English foreign language (EFL) diagnostic assessment of literacy, this study examined the extent to which 175 first-language Hebrew-speaking EFL young learners from fifth to tenth grade exhibited differences in single-letter grapheme recognition, sub-word, and word reading, and rapid automatized naming (RAN) of letters and numbers. In addition, this cross-sectional quasi-experimental quantitative study examined correlations between the aforementioned literacy components and oral reading speed, spelling, vocabulary, syntax, and morphological awareness. There were no differences between the grades for single-letter grapheme recognition, and participants demonstrated incomplete automatic recognition for this task. Sub-word recognition improved across grades. However, the results highlighted a lack of mastery. Sub-word recognition correlated with word reading and spelling throughout. RAN speeded measures and oral reading speed correlated with sub-word, word recognition, and spelling in the older grades illustrating the presence of accuracy and speed components. Correlations across grades between literacy components and vocabulary, syntax, and morphological awareness provided support for theories explaining how knowledge of multiple layers of words contributes to literacy acquisition. These results comprising EFL diagnostic assessment can inform reading and spelling teaching and learning.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":" ","pages":""},"PeriodicalIF":4.1,"publicationDate":"2023-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46550343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Critical discursive approaches to evaluating policy-driven testing: Social impact as a target for validation
Pub Date: 2023-04-27 | DOI: 10.1177/02655322231163863
Dongil Shin
This paper addresses the intersection of testing and policy, situating test-driven impact and validation within the context of policy-led educational reform in Korea. I briefly review existing validation models. Then, arguing for an expansion of the conventional conceptualization of consequential validity research, I use Fairclough’s dialectic–relational approach to critical discourse analysis (CDA), positioned in the critical and poststructuralist research tradition, to evaluate social realities such as the intended and actual impact of policy-led testing. As an example, I take the development of the National English Ability Test (NEAT) in Korea, which was used as a means of implementing government policies. Combining Messick’s validity framework for consequential evidence, Bachman and Palmer’s argument-based approach to validation (the assessment use argument, AUA), and Fairclough’s dialectic–relational approach, I illustrate how the impact of policy-led testing is performed and interpreted as a sociopolitical and discursive phenomenon, constituted and enacted in and through “discourse.” By revisiting previous Faircloughian research on NEAT’s impact, I postulate that the discourses arguing for and against social impact acquire their meanings from dialectical standpoints.
{"title":"Critical discursive approaches to evaluating policy-driven testing: Social impact as a target for validation","authors":"Dongil Shin","doi":"10.1177/02655322231163863","DOIUrl":"https://doi.org/10.1177/02655322231163863","url":null,"abstract":"This paper addresses the intersection of testing and policy, situating test-driven impact and validation within the context of policy-led educational reform in Korea. I will briefly review the existing validation models. Then, arguing for an expansion of the conventional conceptualization of consequential validity research, I use Fairclough’s dialectic–relational approach in critical discourse analysis (CDA), positioned in critical and poststructuralist research tradition, to evaluate social realities, such as intended and actual impact of policy-led testing, I take, as an example, the context of the development of the National English Ability Test (NEAT) in Korea, which had been used as a means of implementing government policies. Combining Messick’s validity framework for consequential evidence, Bachman and Palmer’s argument-based approach to validation (assessment use argument, AUA), and Fairclough’s dialectic–relational approach, I will illustrate how the impact of policy-led testing is performed and interpreted as a sociopolitical and discursive phenomenon, constituted and enacted in and through “discourse.” By revisiting the previous Faircloughian research works on NEAT’s impact, I postulate that the discourses arguing for and against social impact acquire their meanings from dialectical standpoints.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":" ","pages":""},"PeriodicalIF":4.1,"publicationDate":"2023-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43739503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}