L2 and L1 semantic context indices as automated measures of lexical sophistication
Kátia Monteiro, S. Crossley, Robert-Mihai Botarleanu, M. Dascalu
Pub Date: 2023-02-02 | DOI: 10.1177/02655322221147924
Lexical frequency benchmarks have been extensively used to investigate second language (L2) lexical sophistication, especially in language assessment studies. However, indices based on semantic co-occurrence, which may be a better representation of the experience language users have with lexical items, have not been sufficiently tested as benchmarks of lexical sophistication. To address this gap, we developed and tested indices based on semantic co-occurrence from two computational methods, namely, Latent Semantic Analysis and Word2Vec. The indices were developed from one L2 written corpus (i.e., EF Cambridge Open Language Database [EF-CAMDAT]) and one first language (L1) written corpus (i.e., Corpus of Contemporary American English [COCA] Magazine). Available L1 semantic context indices (i.e., Touchstone Applied Sciences Associates [TASA] indices) were also assessed. To validate the indices, they were used to predict L2 essay quality scores as judged by human raters. The models suggested that the semantic context indices developed from EF-CAMDAT and TASA, but not the COCA Magazine indices, explained unique variance in the presence of lexical sophistication measures. This study suggests that semantic context indices based on multi-level corpora, including L2 corpora, may provide a useful representation of the experience L2 writers have with input, which may assist with automatic scoring of L2 writing.
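To make the abstract's core computational idea concrete, the sketch below illustrates one way a Word2Vec-based semantic context index could be computed: train a model on a tokenized reference corpus, then score a text by the mean pairwise cosine similarity among its words. This is not the authors' pipeline; the toy corpus, parameter values, and the exact index definition are illustrative assumptions, using gensim's standard Word2Vec API.

```python
# A toy sketch of a Word2Vec-based semantic context index. The corpus,
# window size, and index definition are illustrative assumptions, not
# the authors' settings.
from itertools import combinations

import numpy as np
from gensim.models import Word2Vec

# Stand-in for a large reference corpus (e.g., EF-CAMDAT or COCA Magazine).
reference_sentences = [
    ["the", "students", "wrote", "short", "essays"],
    ["trained", "raters", "scored", "the", "essays"],
    ["the", "raters", "judged", "essay", "quality"],
]
model = Word2Vec(reference_sentences, vector_size=50, window=5,
                 min_count=1, seed=1)

def semantic_context_index(tokens):
    """Mean pairwise cosine similarity among in-vocabulary tokens."""
    words = [t for t in tokens if t in model.wv]
    pairs = list(combinations(words, 2))
    if not pairs:
        return float("nan")
    return float(np.mean([model.wv.similarity(a, b) for a, b in pairs]))

print(semantic_context_index(["raters", "scored", "essays"]))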
{"title":"L2 and L1 semantic context indices as automated measures of lexical sophistication","authors":"Kátia Monteiro, S. Crossley, Robert-Mihai Botarleanu, M. Dascalu","doi":"10.1177/02655322221147924","DOIUrl":"https://doi.org/10.1177/02655322221147924","url":null,"abstract":"Lexical frequency benchmarks have been extensively used to investigate second language (L2) lexical sophistication, especially in language assessment studies. However, indices based on semantic co-occurrence, which may be a better representation of the experience language users have with lexical items, have not been sufficiently tested as benchmarks of lexical sophistication. To address this gap, we developed and tested indices based on semantic co-occurrence from two computational methods, namely, Latent Semantic Analysis and Word2Vec. The indices were developed from one L2 written corpus (i.e., EF Cambridge Open Language Database [EF-CAMDAT]) and one first language (L1) written corpus (i.e., Corpus of Contemporary American English [COCA] Magazine). Available L1 semantic context indices (i.e., Touchstone Applied Sciences Associates [TASA] indices) were also assessed. To validate the indices, they were used to predict L2 essay quality scores as judged by human raters. The models suggested that the semantic context indices developed from EF-CAMDAT and TASA, but not the COCA Magazine indices, explained unique variance in the presence of lexical sophistication measures. This study suggests that semantic context indices based on multi-level corpora, including L2 corpora, may provide a useful representation of the experience L2 writers have with input, which may assist with automatic scoring of L2 writing.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47590675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Universal tools activation in English language proficiency assessments: A comparison of Grades 1–12 English learners with and without disabilities
Ahyoung Alicia Kim, Meltem Yumsek, J. Kemp, Mark Chapman, H. Gary Cook
Pub Date: 2023-02-02 | DOI: 10.1177/02655322221149009
English learners (ELs) comprise approximately 10% of kindergarten to Grade 12 students in US public schools, with about 15% of ELs identified as having disabilities. English language proficiency (ELP) assessments must adhere to universal design principles and incorporate universal tools, designed to increase accessibility for all ELs, including those with disabilities. This two-phase mixed-methods study examined the extent to which Grades 1–12 ELs with and without disabilities activated universal tools during an online ELP assessment: Color Overlay, Color Contrast, Help Tools, Line Guide, Highlighter, Magnifier, and Sticky Notes. In Phase 1, analyses were conducted on 1.25 million students’ test and telemetry data (records of keystrokes and clicks). Phase 2 involved interviewing 55 ELs after test administration. Findings show that ELs activated the Line Guide, Highlighter, and Magnifier more frequently than the other tools. The tool activation rate was higher in the listening and reading domains than in speaking and writing. A significantly higher percentage of ELs with disabilities activated the tools than ELs without disabilities, but effect sizes were small; interview findings further revealed students’ rationales for tool use. Results indicate differences in ELs’ activation of universal tools depending on their disability category and language domain, providing evidence for the usefulness of these tools.
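As a concrete illustration of the Phase 1 comparison pattern, the sketch below computes tool-activation rates for the two groups from a contingency table, runs a chi-square test, and reports Cramér's V as the effect size. The counts are invented; only the analysis pattern (higher rate for ELs with disabilities, but a small effect size) mirrors the abstract.

```python
# Illustrative only: made-up activation counts for one universal tool,
# comparing ELs with and without disabilities. The numbers are invented;
# the analysis pattern mirrors the abstract.
import numpy as np
from scipy.stats import chi2_contingency

#                     activated   did not activate
table = np.array([[  4_200,  13_800],    # ELs with disabilities
                  [ 21_000, 104_000]])   # ELs without disabilities

chi2, p, dof, _ = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

rates = table[:, 0] / table.sum(axis=1)
print(f"activation rate, with disabilities:    {rates[0]:.3f}")
print(f"activation rate, without disabilities: {rates[1]:.3f}")
print(f"chi2 = {chi2:.1f}, p = {p:.2g}, Cramér's V = {cramers_v:.3f}")
```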
{"title":"Universal tools activation in English language proficiency assessments: A comparison of Grades 1–12 English learners with and without disabilities","authors":"Ahyoung Alicia Kim, Meltem Yumsek, J. Kemp, Mark Chapman, H. Gary Cook","doi":"10.1177/02655322221149009","DOIUrl":"https://doi.org/10.1177/02655322221149009","url":null,"abstract":"English learners (ELs) comprise approximately 10% of kindergarten to Grade 12 students in US public schools, with about 15% of ELs identified as having disabilities. English language proficiency (ELP) assessments must adhere to universal design principles and incorporate universal tools, designed to increase accessibility for all ELs, including those with disabilities. This two-phase mixed methods study examined the extent Grades 1–12 ELs with and without disabilities activated universal tools during an online ELP assessment: Color Overlay, Color Contrast, Help Tools, Line Guide, Highlighter, Magnifier, and Sticky Notes. In Phase 1, analyses were conducted on 1.25 million students’ test and telemetry data (record of keystrokes and clicks). Phase 2 involved interviewing 55 ELs after test administration. Findings show that ELs activated the Line Guide, Highlighter, and Magnifier more frequently than others. The tool activation rate was higher in listening and reading domains than in speaking and writing. A significantly higher percentage of ELs with disabilities activated the tools than ELs without disabilities, but effect sizes were small; interview findings further revealed students’ rationale for tool use. Results indicate differences in ELs’ activation of universal tools depending on their disability category and language domain, providing evidence for the usefulness of these tools.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45570526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linking scores from two written receptive English academic vocabulary tests—The VLT-Ac and the AVT
Marcus Warnby, Hans Malmström, Kajsa Yang Hansen
Pub Date: 2023-01-12 | DOI: 10.1177/02655322221145643
The academic section of the Vocabulary Levels Test (VLT-Ac) and the Academic Vocabulary Test (AVT) both assess meaning-recognition knowledge of written receptive academic vocabulary, deemed central for engagement in academic activities. Depending on the purpose and context of the testing, either of the tests can be appropriate, but for research and pedagogical purposes, it is important to be able to compare scores achieved on the two tests between administrations and within similar contexts. Based on a sample of 385 upper secondary school students in university-preparatory programs (independent CEFR B2-level users of English), this study presents a comparison model by linking the VLT-Ac and the AVT using concurrent calibration procedures in Item Response Theory. The key outcome of the study is a score comparison table providing a means for approximate score comparisons. Additionally, the study showcases a viable and valid method of comparing vocabulary scores from an older test with those from a newer one.
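To make "concurrent calibration" concrete: responses to both tests are stacked into a single person-by-item matrix and one Rasch model is fitted jointly, placing VLT-Ac and AVT item difficulties on a common scale from which score correspondences can be read off. The sketch below is a toy joint-maximum-likelihood estimator on simulated single-group data (everyone takes both tests, as in the study's design); item counts and all data are assumptions, and operational linking would use dedicated IRT software with MML estimation.

```python
# Toy concurrent calibration: one group answers items from two tests,
# a single Rasch model is fitted jointly, and both item sets land on
# one difficulty scale. JML with alternating Newton updates is used
# for brevity; real linking studies use MML in IRT software.
import numpy as np

rng = np.random.default_rng(0)
n_persons = 385                       # mirrors the study's sample size
n_items = 20                          # items 0-9 ~ "VLT-Ac", 10-19 ~ "AVT"
theta_true = rng.normal(0, 1, n_persons)
b_true = rng.normal(0, 1, n_items)
prob = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
resp = (rng.random((n_persons, n_items)) < prob).astype(float)

theta = np.zeros(n_persons)
b = np.zeros(n_items)
for _ in range(100):
    pr = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    theta += (resp - pr).sum(axis=1) / np.maximum((pr * (1 - pr)).sum(axis=1), 1e-9)
    theta = np.clip(theta, -6, 6)     # guard against perfect-score divergence
    pr = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    b -= (resp - pr).sum(axis=0) / np.maximum((pr * (1 - pr)).sum(axis=0), 1e-9)
    b -= b.mean()                     # fix the scale's origin

print("test A difficulties:", b[:10].round(2))
print("test B difficulties:", b[10:].round(2))  # same latent metric
```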
{"title":"Linking scores from two written receptive English academic vocabulary tests—The VLT-Ac and the AVT","authors":"Marcus Warnby, Hans Malmström, Kajsa Yang Hansen","doi":"10.1177/02655322221145643","DOIUrl":"https://doi.org/10.1177/02655322221145643","url":null,"abstract":"The academic section of the Vocabulary Levels Test (VLT-Ac) and the Academic Vocabulary Test (AVT) both assess meaning-recognition knowledge of written receptive academic vocabulary, deemed central for engagement in academic activities. Depending on the purpose and context of the testing, either of the tests can be appropriate, but for research and pedagogical purposes, it is important to be able to compare scores achieved on the two tests between administrations and within similar contexts. Based on a sample of 385 upper secondary school students in university-preparatory programs (independent CEFR B2-level users of English), this study presents a comparison model by linking the VLT-Ac and the AVT using concurrent calibration procedures in Item Response Theory. The key outcome of the study is a score comparison table providing a means for approximate score comparisons. Additionally, the study showcases a viable and valid method of comparing vocabulary scores from an older test with those from a newer one.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46502650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measuring bilingual language dominance: An examination of the reliability of the Bilingual Language Profile
Daniel J. Olson
Pub Date: 2023-01-12 | DOI: 10.1177/02655322221139162
Measuring language dominance, broadly defined as the relative strength of each of a bilingual’s two languages, remains a crucial methodological issue in bilingualism research. While various methods have been proposed, the Bilingual Language Profile (BLP) has been one of the most widely used tools for measuring language dominance. While previous studies have begun to establish its validity, the BLP has yet to be systematically evaluated with respect to reliability. Addressing this methodological gap, the current study examines the reliability of the BLP, employing a test–retest methodology with a large (N = 248), varied sample of Spanish–English bilinguals. The analysis focuses on the test–retest reliability of the overall dominance score, the dominant and non-dominant global language scores, and the subcomponent scores. The results demonstrate that the language dominance score produced by the BLP shows “excellent” levels of test–retest reliability. In addition, while some differences were found between the reliability of the global language scores for the dominant and non-dominant languages, and between the different subcomponent scores, all components of the BLP display strong reliability. Taken as a whole, this study provides evidence for the reliability of the BLP as a measure of bilingual language dominance.
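For readers who want the mechanics, the sketch below simulates two administrations of a dominance score and computes the two statistics most often reported for test-retest designs: a Pearson correlation and ICC(2,1) under the standard Shrout–Fleiss two-way random-effects definition. The data are simulated assumptions; only the analysis pattern follows the abstract, and the study's own choice of reliability coefficient is not specified here.

```python
# Test-retest reliability sketch on simulated BLP-like scores:
# Pearson r plus ICC(2,1) (two-way random effects, absolute agreement,
# single rating). All numbers are invented.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
true_dom = rng.normal(0, 50, 248)           # latent dominance scores
t1 = true_dom + rng.normal(0, 10, 248)      # administration 1
t2 = true_dom + rng.normal(0, 10, 248)      # administration 2

r, _ = pearsonr(t1, t2)

scores = np.column_stack([t1, t2])
n, k = scores.shape
ms_rows = k * scores.mean(axis=1).var(ddof=1)   # between-person mean square
ms_cols = n * scores.mean(axis=0).var(ddof=1)   # between-occasion mean square
ss_total = ((scores - scores.mean()) ** 2).sum()
ms_err = (ss_total - (n - 1) * ms_rows - (k - 1) * ms_cols) / ((n - 1) * (k - 1))
icc = (ms_rows - ms_err) / (
    ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

print(f"Pearson r = {r:.3f}, ICC(2,1) = {icc:.3f}")
```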
{"title":"Measuring bilingual language dominance: An examination of the reliability of the Bilingual Language Profile","authors":"Daniel J. Olson","doi":"10.1177/02655322221139162","DOIUrl":"https://doi.org/10.1177/02655322221139162","url":null,"abstract":"Measuring language dominance, broadly defined as the relative strength of each of a bilingual’s two languages, remains a crucial methodological issue in bilingualism research. While various methods have been proposed, the Bilingual Language Profile (BLP) has been one of the most widely used tools for measuring language dominance. While previous studies have begun to establish its validity, the BLP has yet to be systematically evaluated with respect to reliability. Addressing this methodological gap, the current study examines the reliability of the BLP, employing a test–retest methodology with a large (N = 248), varied sample of Spanish–English bilinguals. Analysis focuses on the test–retest reliability of the overall dominance score, the dominant and non-dominant global language scores, and the subcomponent scores. The results demonstrate that the language dominance score produced by the BLP shows “excellent” levels of test–retest reliability. In addition, while some differences were found between the reliability of global language scores for the dominant and non-dominant languages, and for the different subcomponent scores, all components of the BLP display strong reliability. Taken as a whole, this study provides evidence for the reliability of BLP as a measure of bilingual language dominance.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45222587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Book Review: Reflecting on the Common European Framework of Reference for Languages and its companion volume
Claudia Harsch
Pub Date: 2023-01-04 | DOI: 10.1177/02655322221144788
{"title":"Book Review: Reflecting on the Common European Framework of Reference for Languages and its companion volume","authors":"Claudia Harsch","doi":"10.1177/02655322221144788","DOIUrl":"https://doi.org/10.1177/02655322221144788","url":null,"abstract":"Aryadoust, V., Ng, L. Y., & Sayama, H. (2020). A comprehensive review of Rasch measurement in language assessment: Recommendations and guidelines for research. Language Testing, 38(1), 6–40. https://doi.org/10.1177/0265532220927487 Berrío, Á. I., Gómez-Benito, J., & Arias-Patiño, E. M. (2020). Developments and trends in research on methods of detecting differential item functioning. Educational Research Review, 31, Article 100340. https://doi.org/10.1016/j.edurev.2020.100340 Choi, Y.-J., & Asilkalkan, A. (2019). R packages for item response theory analysis: Descriptions and features. Measurement: Interdisciplinary Research and Perspectives, 17(3), 168–175. https://doi.org/10.1080/15366367.2019.1586404 Desjardins, C. D., & Bulut, O. (2018). Handbook of educational measurement and psychometrics using R. CRC Press. https://doi.org/10.1201/b20498 Linacre, J. M. (2022a). Facets computer program for many-facet Rasch measurement (Version 3.84.0). Winsteps. Linacre, J. M. (2022b). Winsteps® Rasch measurement computer program (Version 5.3.1). Winsteps. Luo, Y., & Jiao, H. (2017). Using the Stan program for Bayesian item response theory. Educational and Psychological Measurement, 78(3), 384–408. https://doi.org/10.1177/0013164417693666 Nicklin, C., & Vitta, J. P. (2022). Assessing Rasch measurement estimation methods across R packages with yes/no vocabulary test data. Language Testing, 39(4), 513–540. https://doi. org/10.1177/02655322211066822 Yildiz, H. (2021). IrtGUI: Item response theory analysis with a graphic user interface (R Package Version 0.2). https://CRAN.R-project.org/package=irtGUI","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48666199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Construct validity and fairness of an operational listening test with World Englishes
H. Nishizawa
Pub Date: 2023-01-04 | DOI: 10.1177/02655322221137869
In this study, I investigate the construct validity and fairness pertaining to the use of a variety of Englishes in listening test input. I obtained data from a post-entry English language placement test administered at a public university in the United States. In addition to American English, which test takers were expected to find familiar, the test features Hawai’i, Filipino, and Indian English, varieties expected to be less familiar to our test takers but justified by the testing context. I used confirmatory factor analysis to test whether the category of unfamiliar English items formed a latent factor distinct from the category of more familiar American English items. I used Rasch-based differential item functioning analysis to examine item bias as a function of examinees’ place of origin. The results from the confirmatory factor analysis suggested that the unfamiliar English items tapped into the same underlying construct as the familiar English items. The Rasch-based differential item functioning analysis revealed many instances of item bias among the unfamiliar English items, with higher proportions of bias for items targeting narrow comprehension than for those targeting broad comprehension. However, at the test level, the unfamiliar English items did not substantially influence raw total scores. These findings offer support for using a variety of Englishes in listening tests.
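A DIF analysis asks whether an item is systematically harder for one group after controlling for ability. The sketch below demonstrates that question with logistic-regression DIF, a common alternative to the Rasch-based procedure the study actually used: conditioned on a matching score, a significant group coefficient flags the item. All variables and numbers are invented for illustration.

```python
# Logistic-regression DIF on simulated data: regress an item response
# on a matching score and a group indicator; a significant group term
# flags (uniform) DIF. This is an alternative to the study's
# Rasch-based procedure; all data here are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
group = rng.integers(0, 2, n)                # place-of-origin indicator (0/1)
ability = rng.normal(0, 1, n)
matching = ability + rng.normal(0, 0.5, n)   # proxy for the total score

# One item with 0.5 logits of uniform DIF against group 1
logit = ability - 0.2 - 0.5 * group
item = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([matching, group]))
fit = sm.Logit(item, X).fit(disp=0)
print("group coefficient (DIF estimate):", round(fit.params[2], 3))
print("p-value:", round(fit.pvalues[2], 4))
```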
{"title":"Construct validity and fairness of an operational listening test with World Englishes","authors":"H. Nishizawa","doi":"10.1177/02655322221137869","DOIUrl":"https://doi.org/10.1177/02655322221137869","url":null,"abstract":"In this study, I investigate the construct validity and fairness pertaining to the use of a variety of Englishes in listening test input. I obtained data from a post-entry English language placement test administered at a public university in the United States. In addition to expectedly familiar American English, the test features Hawai’i, Filipino, and Indian English, which are expectedly less familiar to our test takers, but justified by the context. I used confirmatory factor analysis to test whether the category of unfamiliar English items formed a latent factor distinct from the other category of more familiar American English items. I used Rasch-based differential item functioning analysis to examine item biases as a function of examinees’ place of origin. The results from the confirmatory factor analysis suggested that the unfamiliar English items tapped into the same underlying construct as the familiar English items. The Rasch-based differential item functioning analysis revealed many instances of item bias among unfamiliar English items with higher proportions of item biases for items targeting narrow comprehension than broad comprehension. However, at the test level, the unfamiliar English items did not substantially influence raw total scores. These findings offer support for using a variety of Englishes in listening tests.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47354788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Test design and validity evidence of interactive speaking assessment in the era of emerging technologies
Soo Jung Youn
Pub Date: 2023-01-01 | DOI: 10.1177/02655322221126606
As access to smartphones and emerging technologies has become ubiquitous in our daily lives and in language learning, technology-mediated social interaction has become common in teaching and assessing L2 speaking. The changing ecology of L2 spoken interaction provides language educators and testers with opportunities for renewed test design and the gathering of context-sensitive validity evidence for interactive speaking assessment. First, I review the current research on interactive speaking assessment, focusing on commonly used test formats and types of validity evidence. Second, I discuss recent research that reports the use of artificial intelligence and technologies in teaching and assessing speaking, in order to understand what evidence of interactive speaking is elicited and how. Based on the discussion, I argue that it is critical to identify which features of interactive speaking are elicited by different types of technology-mediated interaction, so that assessment decisions are valid in relation to intended uses. I further discuss opportunities and challenges for future research on test design and on eliciting validity evidence of interactive speaking through technology-mediated interaction.
{"title":"Test design and validity evidence of interactive speaking assessment in the era of emerging technologies","authors":"Soo Jung Youn","doi":"10.1177/02655322221126606","DOIUrl":"https://doi.org/10.1177/02655322221126606","url":null,"abstract":"As access to smartphones and emerging technologies has become ubiquitous in our daily lives and in language learning, technology-mediated social interaction has become common in teaching and assessing L2 speaking. The changing ecology of L2 spoken interaction provides language educators and testers with opportunities for renewed test design and the gathering of context-sensitive validity evidence of interactive speaking assessment. First, I review the current research on interactive speaking assessment focusing on commonly used test formats and types of validity evidence. Second, I discuss recent research that reports the use of artificial intelligence and technologies in teaching and assessing speaking in order to understand how and what evidence of interactive speaking is elicited. Based on the discussion, I argue that it is critical to identify what features of interactive speaking are elicited depending on the types of technology-mediated interaction for valid assessment decisions in relation to intended uses. I further discuss opportunities and challenges for future research on test design and eliciting validity evidence of interactive speaking using technology-mediated interaction.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44244212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The vexing problem of validity and the future of second language assessment
Vahid Aryadoust
Pub Date: 2023-01-01 | DOI: 10.1177/02655322221125204
Construct validity and the building of validity arguments are among the main challenges facing the language assessment community. The notions of construct validity and validity arguments arose from research in psychological assessment and developed into the gold standard of validation research in language assessment. At a theoretical level, construct validity and validity arguments conflate scientific reasoning about assessment with policy matters of ethics. Thus, a test validator is expected to simultaneously conduct scientific research and examine the consequential basis of assessments. I contend that validity investigations should be decoupled from the ethical and social aspects of assessment. In addition, the near-exclusive focus of empirical construct validity research on cognitive processing has not yielded sufficient accuracy and replicability in predicting test takers’ performance in real language use domains. Accordingly, I underscore the significance of prediction in validation, in contrast to explanation, and propose that the question to ask might be not so much what a test measures as what types of methods and tools can better generate language use profiles. Finally, I suggest that interdisciplinary alliances with the fields of cognitive and computational neuroscience and artificial intelligence (AI) should be forged to meet the demands of language assessment in the 21st century.
{"title":"The vexing problem of validity and the future of second language assessment","authors":"Vahid Aryadoust","doi":"10.1177/02655322221125204","DOIUrl":"https://doi.org/10.1177/02655322221125204","url":null,"abstract":"Construct validity and building validity arguments are some of the main challenges facing the language assessment community. The notion of construct validity and validity arguments arose from research in psychological assessment and developed into the gold standard of validation/validity research in language assessment. At a theoretical level, construct validity and validity arguments conflate the scientific reasoning in assessment and policy matters of ethics. Thus, a test validator is expected to simultaneously serve the role of conducting scientific research and examining the consequential basis of assessments. I contend that validity investigations should be decoupled from the ethical and social aspects of assessment. In addition, the near-exclusive focus of empirical construct validity research on cognitive processing has not resulted in sufficient accuracy and replicability in predicting test takers’ performance in real language use domains. Accordingly, I underscore the significance of prediction in validation, in contrast to explanation, and propose that the question to ask might not so much be about what a test measures as what type of methods and tools can better generate language use profiles. Finally, I suggest that interdisciplinary alliances with cognitive and computational neuroscience and artificial intelligence (AI) fields should be forged to meet the demands of language assessment in the 21st century.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48647839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Forty years of Language Testing, and the changing paths of publishing
Paula M. Winke
Pub Date: 2023-01-01 | DOI: 10.1177/02655322221136802
{"title":"Forty years of Language Testing, and the changing paths of publishing","authors":"Paula M. Winke","doi":"10.1177/02655322221136802","DOIUrl":"https://doi.org/10.1177/02655322221136802","url":null,"abstract":"","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46083663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Epilogue—Note from an outgoing editor","authors":"L. Harding","doi":"10.1177/02655322221138339","DOIUrl":"https://doi.org/10.1177/02655322221138339","url":null,"abstract":"In this brief epilogue, outgoing editor Luke Harding reflects on his time as editor and considers the future Language Testing.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45426305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}