Pub Date: 2023-01-01 | DOI: 10.1177/02655322221127421
L. Taylor
As applied linguists and language testers, we are in the business of “doing language”. For many of us, language learning is a lifelong passion, and we invest similar enthusiasm in our language assessment research and testing practices. Language is also the vehicle through which we communicate that enthusiasm to others, sharing our knowledge and experience with colleagues so we can all grow in understanding and expertise. We are actually quite good at communicating within our own community. But when it comes to interacting with people beyond our own field, are we such effective communicators? Wider society—politicians, journalists, policymakers, social commentators, teachers, and parents—all seem to find assessment matters hard to grasp. And I am not sure we as language testers do much to help them. So I find myself wondering why that is. Is it that our language is too specialised, or overly technical? Do we choose unhelpful words or images when we talk about testing? Worse still, do we sometimes come across as rather arrogant or patronising, perhaps even irrelevant to non-specialists’ needs and concerns? If so, could we reframe our discourse and rhetoric in future to improve our communicative effectiveness, and how might we do that?
{"title":"Reframing the discourse and rhetoric of language testing and assessment for the public square","authors":"L. Taylor","doi":"10.1177/02655322221127421","DOIUrl":"https://doi.org/10.1177/02655322221127421","url":null,"abstract":"As applied linguists and language testers, we are in the business of “doing language”. For many of us, language learning is a lifelong passion, and we invest similar enthusiasm in our language assessment research and testing practices. Language is also the vehicle through which we communicate that enthusiasm to others, sharing our knowledge and experience with colleagues so we can all grow in understanding and expertise. We are actually quite good at communicating within our own community. But when it comes to interacting with people beyond our own field, are we such effective communicators? Wider society—politicians, journalists, policymakers, social commentators, teachers, and parents—all seem to find assessment matters hard to grasp. And I am not sure we as language testers do much to help them. So I find myself wondering why that is? Is it that our language is too specialised, or overly technical? Do we choose unhelpful words or images when we talk about testing? Worse still, do we sometimes come across as rather arrogant or patronising, perhaps even irrelevant to non-specialists’ needs and concerns? If so, could we perhaps consider reframing our discourse and rhetoric in future to improve our communicative effectiveness, and how might we do that?","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"47 - 53"},"PeriodicalIF":4.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47000811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01 | DOI: 10.1177/02655322221126607
J. Burton
Now in its 40th year, Language Testing has served as the flagship journal for scholars, researchers, and practitioners in the field of language testing and assessment. This viewpoint piece, written from the perspective of an emerging scholar, discusses two possible future trends based on evidence going back to the very first issue of the journal. First, this paper outlines past efforts to describe and define the construct of second language communication, noting that much work remains to be done toward a more complete description in terms of interactional competence and nonverbal behavior. The second trend highlights the growing movement in applied linguistics toward research transparency through Open Science practices, including replication studies, the sharing of data and materials, and preregistration. This paper outlines work to date in Language Testing that encourages open practices and emphasizes the importance of these practices in assessment research.
{"title":"Reflections on the past and future of language testing and assessment: An emerging scholar’s perspective","authors":"J. Burton","doi":"10.1177/02655322221126607","DOIUrl":"https://doi.org/10.1177/02655322221126607","url":null,"abstract":"In its 40th year, Language Testing journal has served as the flagship journal for scholars, researchers, and practitioners in the field of language testing and assessment. This viewpoint piece, written from the perspective of an emerging scholar, discusses two possible future trends based on evidence going back to the very first issue of this journal. First, this paper outlines past efforts to describe and define the construct of second language communication, noting that much work has yet to be done for a more complete description in terms of interactional competence and nonverbal behavior. The second trend highlights the growing movement in applied linguistics toward research transparency through Open Science practices, including replication studies, the sharing of data and materials, and preregistration. This paper outlines work to date in Language Testing that encourages open practices and emphasizes the importance of these practices in assessment research.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"24 - 30"},"PeriodicalIF":4.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47094561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-27 | DOI: 10.1177/02655322221134218
Salomé Villa Larenas, Tineke Brunfaut
Research has shown that language teachers typically feel underprepared for the assessment aspects of their job. One reason may relate to how teacher education programmes prepare future teachers in this area. However, research insights into how, and to what extent, teacher educators train future language teachers in language assessment matters are scarce, as are insights into the language assessment literacy (LAL) of the teacher educators themselves. Additionally, while research insights into the components that constitute LAL are increasingly available, how such components interrelate is largely unexplored. To help address these research gaps, we investigated the LAL of English as a Foreign Language teacher educators in Chile. Through interviews with 20 teacher educators and analysis of their language assessment materials, we identified five LAL components (language assessment knowledge, conceptions, context, practices, and learning) and two by-products of LAL (language assessor identity and self-efficacy). The components were found to interrelate in a complex manner, which we visualized with a model of concentric ovals depicting how LAL is socially constructed (and re-constructed) from and for the specific context in which teacher educators’ practices are immersed. We discuss implications for LAL conceptualisations and for LAL research methodology.
{"title":"But who trains the language teacher educator who trains the language teacher? An empirical investigation of Chilean EFL teacher educators’ language assessment literacy","authors":"Salomé Villa Larenas, Tineke Brunfaut","doi":"10.1177/02655322221134218","DOIUrl":"https://doi.org/10.1177/02655322221134218","url":null,"abstract":"Research has shown that language teachers typically feel underprepared for assessment aspects of their job. One reason may relate to how teacher education programmes prepare future teachers in this area. Research insights into how and to what extent teacher educators train future language teachers in language assessment matters are scarce, however, as are insights into the language assessment literacy (LAL) of the teacher educators themselves. Additionally, while increasingly research insights are available on components that constitute LAL, how such components interrelate is largely unexplored. To help address these research gaps, we investigated the LAL of English as a Foreign Language teacher educators in Chile. Through interviews with 20 teacher educators and analysis of their language assessment materials, five LAL components were identified (language assessment knowledge, conceptions, context, practices, and learning), and two by-products of LAL (language assessor identity and self-efficacy). The components were found to interrelate in a complex manner, which we visualized with a model of concentric oval shapes, depicting how LAL is socially constructed (and re-constructed) from and for the specific context in which teacher educators’ practices are immersed. We discuss implications for LAL conceptualisations and for LAL research methodology.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"463 - 492"},"PeriodicalIF":4.1,"publicationDate":"2022-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46849230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-16 | DOI: 10.1177/02655322221140012
Zhiqing Lin, Huilin Chen
{"title":"Book Review: An Introduction to the Rasch Model with Examples in R","authors":"Zhiqing Lin, Huilin Chen","doi":"10.1177/02655322221140012","DOIUrl":"https://doi.org/10.1177/02655322221140012","url":null,"abstract":"","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"450 - 453"},"PeriodicalIF":4.1,"publicationDate":"2022-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46270092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-16 | DOI: 10.1177/02655322221140331
J. Norris
The second edition of The Routledge Handbook of Language Testing, published in 2022, is a hefty volume, covering a broad swath of theory, research, and practice in language testing over some 600+ pages. Editors Glenn Fulcher and Luke Harding have done a nice job of updating the first edition, bringing in a handful of new contributions for a total of 36 chapters, re-arranging the organization somewhat to collocate topics thematically, and encouraging revisions to nearly all of the included chapters. Compiling edited volumes, never mind substantial handbooks like this one that are intended to reflect an entire field, is never an easy or straightforward endeavor. Choices inevitably must be made about which experts to invite, what topics to include and which to leave out, and how to arrange the contents and situate the contributions against the backdrop of an active and evolving domain of research and practice. On the whole, this book does a good job of reflecting much of what is on the minds of language testing researchers and practitioners as they go about the scholarship and business of language assessment, and it does so in a reader-friendly way, with relatively brief and consistently organized chapters produced by an impressive group of experts. I believe these characteristics recommend the book for use in seminars on language testing and as an authoritative reference for a variety of language testing stakeholders—indeed, many of these chapters will help in the cause of advancing language assessment literacy in multiple sectors (if we can only encourage their being read by individuals in those sectors . . .). In the following, I highlight a few dimensions of the volume that I find particularly useful and/or insightful, and I offer some observations on aspects that might have deserved more attention or perhaps should merit attention in the next edition. The book is arranged in 10 topical sections with three to five chapters each, fronted by a brief editorial introduction and ending with a subject and author index. In the introduction, the editors do a nice job of rationalizing the different sections of the book and introducing the key contributions of the distinct chapters. They also effectively link core ideas and themes that transcend individual chapters, thereby helping readers to notice important threads that connect the different perspectives and issues covered. Dispensing with one production quibble up front: the index is not well compiled. While no doubt a challenge with so many contributing authors and such wide-ranging contents, a good index is all the more important for a big book like this one. Yet this index has numerous . . .
{"title":"Book Review: The Routledge Handbook of Language Testing","authors":"J. Norris","doi":"10.1177/02655322221140331","DOIUrl":"https://doi.org/10.1177/02655322221140331","url":null,"abstract":"The second edition of The Routledge Handbook of Language Testing, published in 2022, is a hefty volume, covering a broad swath of theory, research, and practice in language testing over some 600 + pages. Editors Glenn Fulcher and Luke Harding have done a nice job of updating the first edition, bringing in a handful of new contributions for a total of 36 chapters, re-arranging the organization somewhat to collocate topics thematically, and encouraging revisions to nearly all of the included chapters. Compiling edited volumes, never mind substantial handbooks that are intended to reflect the entire field, like this one, is never an easy or straightforward endeavor. Choices inevitably must be made about which experts to invite, what topics to include and which ones to leave out, and how to arrange the contents and situate the contributions against the backdrop of an active and evolving domain of research and practice. On the whole, this book does a good job of reflecting a lot of what is on the minds of language testing researchers and practitioners as they go about the scholarship and business of language assessment, and it does so in a reader-friendly way, with relatively brief and consistently organized chapters produced by an impressive group of experts. I believe these characteristics recommend the book for use in seminars on language testing and as an authoritative reference for a variety of language testing stakeholders—indeed, many of these chapters will help in the cause of advancing language assessment literacy in multiple sectors (if we can only encourage their being read by individuals in those sectors . . .). In the following, I highlight a few dimensions of the volume that I find particularly useful and/or insightful, and I offer some observations on aspects that might have deserved more attention or perhaps should merit attention in the next edition. The book is arranged in 10 topical sections with three to five chapters each, fronted by a brief editorial introduction and ending with a subject and author index. In the introduction, the editors do a nice job of rationalizing the different sections of the book and introducing the key contributions of the distinct chapters. They also effectively link core ideas and themes that transcend individual chapters, thereby helping readers to notice important threads that connect the different perspectives and issues covered. Dispensing with one production quibble up front, the Index is not well compiled. While no doubt a challenge with so many contributing authors and such wide-ranging contents, a good index is all the more important for a big book like this one. 
Yet this index has numerous 1140331 LTJ0010.1177/02655322221140331Language Testing</italic>Book Reviews research-article2022","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"440 - 449"},"PeriodicalIF":4.1,"publicationDate":"2022-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45239190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-12 | DOI: 10.1177/02655322221135025
Sathena Chan, Lyn May
Despite the increased use of integrated tasks in high-stakes academic writing assessment, research on rating criteria that reflect the unique construct of integrated summary writing skills is comparatively rare. Using a mixed-methods approach combining expert judgement, text analysis, and statistical analysis, this study examines writing features that discriminate summaries produced by 150 candidates at five levels of proficiency on integrated reading-writing (R-W) and listening-writing (L-W) tasks. The expert judgement revealed a wide range of features that discriminated R-W and L-W responses. When responses at the five proficiency levels were coded for these features, significant differences across proficiency levels on the R-W task were obtained for seven features: relevance of ideas, paraphrasing skills, accuracy of source information, academic style, language control, coherence and cohesion, and task fulfilment. The same features did not yield significant differences across proficiency levels in L-W responses. The findings have important implications for clarifying the construct of integrated summary writing in different modalities, indicating the possibility of expanding integrated rating categories, with some potential for translating the identified criteria into automated rating systems. The results for the L-W task indicate the need to develop descriptors that can more effectively discriminate L-W responses.
{"title":"Towards more valid scoring criteria for integrated reading-writing and listening-writing summary tasks","authors":"Sathena Chan, Lyn May","doi":"10.1177/02655322221135025","DOIUrl":"https://doi.org/10.1177/02655322221135025","url":null,"abstract":"Despite the increased use of integrated tasks in high-stakes academic writing assessment, research on rating criteria which reflect the unique construct of integrated summary writing skills is comparatively rare. Using a mixed-method approach of expert judgement, text analysis, and statistical analysis, this study examines writing features that discriminate summaries produced by 150 candidates at five levels of proficiency on integrated reading-writing (R-W) and listening-writing (L-W) tasks. The expert judgement revealed a wide range of features which discriminated R-W and L-W responses. When responses at five proficiency levels were coded by these features, significant differences were obtained in seven features, including relevance of ideas, paraphrasing skills, accuracy of source information, academic style, language control, coherence and cohesion, and task fulfilment across proficiency levels on the R-W task. The same features did not yield significant differences in L-W responses across proficiency levels. The findings have important implications for clarifying the construct of integrated summary writing in different modalities, indicating the possibility of expanding integrated rating categories with some potential for translating the identified criteria into automated rating systems. The results on the L-W indicate the need for developing descriptors which can more effectively discriminate L-W responses.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"410 - 439"},"PeriodicalIF":4.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42928623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-12-07 | DOI: 10.1177/02655322221126604
Vahid Aryadoust, Lan Luo
This study reviewed conceptualizations and operationalizations of second language (L2) listening constructs. A total of 157 peer-reviewed papers published in 19 applied linguistics journals were coded for (1) publication year, author, source title, location, language, and reliability, and (2) the listening subskills, cognitive processes, attributes, and listening functions potentially measured or investigated. Only 39 publications (24.84%) provided theoretical definitions of listening constructs, and 38 of these were general or had narrow construct coverage. Listening functions such as discriminative, empathetic, and analytical listening were largely unattended to in construct conceptualization in the studies. In addition, we identified 24 subskills, 27 cognitive processes, and 54 listening attributes (105 in total) operationalized in the studies, and we developed a multilayered framework to categorize these features. The subskills and cognitive processes each fell into five principal groups (10 groups in total), while the attributes were divided into three main groups. This multicomponential framework will be useful for construct delineation and operationalization in L2 listening assessment and teaching. Finally, limitations of the extant research and future directions for research and development in L2 listening assessment are discussed.
{"title":"The typology of second language listening constructs: A systematic review","authors":"Vahid Aryadoust, Lan Luo","doi":"10.1177/02655322221126604","DOIUrl":"https://doi.org/10.1177/02655322221126604","url":null,"abstract":"This study reviewed conceptualizations and operationalizations of second language (L2) listening constructs. A total of 157 peer-reviewed papers published in 19 journals in applied linguistics were coded for (1) publication year, author, source title, location, language, and reliability and (2) listening subskills, cognitive processes, attributes, and listening functions potentially measured or investigated. Only 39 publications (24.84%) provided theoretical definitions for listening constructs, 38 of which were general or had a narrow construct coverage. Listening functions such as discriminative, empathetic, and analytical listening were largely unattended to in construct conceptualization in the studies. In addition, we identified 24 subskills, 27 cognitive processes, and 54 listening attributes (total = 105) operationalized in the studies. We developed a multilayered framework to categorize these features. The subskills and cognitive processes were categorized into five principal groups each (10 groups total), while the attributes were divided into three main groups. This multicomponential framework will be useful in construct delineation and operationalization in L2 listening assessment and teaching. Finally, limitations of the extant research and future directions for research and development in L2 listening assessment are discussed.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"375 - 409"},"PeriodicalIF":4.1,"publicationDate":"2022-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44181146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-21 | DOI: 10.1177/02655322221122774
A. Batty, T. Haug, Sarah Ebling, Katja Tissi, Sandra Sidler-Miserez
Sign languages present particular challenges to language assessors, owing to variation in signs, weakly defined citation forms, and a general lack of standard-setting work even in long-established measures of productive sign proficiency. The present article addresses and explores these issues via a mixed-methods study of a human-rated form-recall vocabulary test of 98 signs for beginning adult learners of Swiss German Sign Language (DSGS), using post-test qualitative rater interviews to inform the interpretation of a many-facets Rasch analysis of the test ratings. Significant differences between two expert raters were observed on three signs. The follow-up interview revealed disagreement on the criterion of correctness, despite the raters’ involvement in developing the base lexicon of signs. The findings highlight the challenges of using human ratings to assess the production not only of sign language vocabulary but of minority languages generally, and underscore the need for greater effort on the standardization of sign language assessment.
{"title":"Challenges in rating signed production: A mixed-methods study of a Swiss German Sign Language form-recall vocabulary test","authors":"A. Batty, T. Haug, Sarah Ebling, Katja Tissi, Sandra Sidler-Miserez","doi":"10.1177/02655322221122774","DOIUrl":"https://doi.org/10.1177/02655322221122774","url":null,"abstract":"Sign languages present particular challenges to language assessors in relation to variation in signs, weakly defined citation forms, and a general lack of standard-setting work even in long-established measures of productive sign proficiency. The present article addresses and explores these issues via a mixed-methods study of a human-rated form-recall sign vocabulary test of 98 signs for beginning adult learners of Swiss German Sign Language (DSGS), using post-test qualitative rater interviews to inform interpretation of the results of quantitative analysis of the test ratings using many-facets Rasch measurement. Significant differences between two expert raters were observed on three signs. The follow-up interview revealed disagreement on the criterion of correctness, despite the raters’ involvement in the development of the base lexicon of signs. The findings highlight the challenges of using human ratings to assess the production not only of sign language vocabulary, but of minority languages generally, and underscore the need for greater effort expended on the standardization of sign language assessment.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"352 - 374"},"PeriodicalIF":4.1,"publicationDate":"2022-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43174840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-14 | DOI: 10.1177/02655322221117136
Yan-ping Jin
Test takers have no say about the content of tests or about the decisions made based on their results; worse, they are forced to comply with the demands of tests by changing their behaviour in order to succeed on them.
{"title":"Test-taker insights for language assessment policies and practices","authors":"Yan-ping Jin","doi":"10.1177/02655322221117136","DOIUrl":"https://doi.org/10.1177/02655322221117136","url":null,"abstract":"test takers have no say about the content of tests and about the decisions made based on their results; worse, they are forced to comply with the demands of tests by changing their behaviour in order to succeed on them","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"193 - 203"},"PeriodicalIF":4.1,"publicationDate":"2022-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48856564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-02 | DOI: 10.1177/02655322221114015
D. Leontjev, A. Huhta, A. Tolvanen
Derivational morphology (DM), and how it can be assessed, has been investigated relatively rarely in language learning and testing research. The goal of this study is to add to the understanding of the nature of DM knowledge by exploring whether and how it is separable from vocabulary breadth. Eight measures of L2 (second or foreign language) English DM knowledge and three measures of English vocabulary size were administered to 120 learners. We conducted two confirmatory factor analyses, one with a single underlying factor and the other treating vocabulary breadth and DM as separate factors. As neither model fit satisfactorily without introducing a residual covariance into the two-factor model, we conducted an exploratory factor analysis, which suggested two separate DM factors in addition to vocabulary breadth. Regardless, the analysis demonstrated that DM knowledge was separate from learners’ vocabulary breadth. However, the vocabulary breadth factor still explained a substantial amount of variance in learners’ performance on the DM measures. We discuss theoretical implications and implications for L2 assessment.
{"title":"L2 English vocabulary breadth and knowledge of derivational morphology: One or two constructs?","authors":"D. Leontjev, A. Huhta, A. Tolvanen","doi":"10.1177/02655322221114015","DOIUrl":"https://doi.org/10.1177/02655322221114015","url":null,"abstract":"Derivational morphology (DM) and how it can be assessed have been investigated relatively rarely in language learning and testing research. The goal of this study is to add to the understanding of the nature of DM knowledge, exploring whether and how it is separable from vocabulary breadth. Eight L2 (second or foreign language) English DM knowledge measures and three measures of the size of the English vocabulary were administered to 120 learners. We conducted two confirmatory factor analyses, one with one underlying factor and the other treating vocabulary breadth and DM as separate. As neither model had a satisfactory fit without introducing a residual covariance to the two-factor model, we conducted an exploratory factor analysis, which suggested two separate DM factors in addition to vocabulary breadth. Regardless, the analysis demonstrated that the DM knowledge was separate from learners’ vocabulary breadth. However, learners’ vocabulary breadth factor still explained a substantial amount of variance in learners’ performance on DM measures. We discuss theoretical implications and implications for L2 assessment.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":"40 1","pages":"300 - 324"},"PeriodicalIF":4.1,"publicationDate":"2022-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44284962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}