Towards a new sophistication in vocabulary assessment
Pub Date: 2023-01-01 | DOI: 10.1177/02655322221125698
J. Read
Published work on vocabulary assessment has grown substantially in the last 10 years, but it is still somewhat outside the mainstream of the field. There has been a recent call for those developing vocabulary tests to apply professional standards to their work, especially in validating their instruments for specified purposes before releasing them for widespread use. A great deal of work on vocabulary assessment can be seen in terms of the somewhat problematic distinction between breadth and depth of vocabulary knowledge. Breadth refers to assessing vocabulary size, based on a large sample of words from a frequency list. New research is raising questions about the suitability of word frequency norms derived from large corpora, the choice of the word family as the unit of analysis, the selection of appropriate test formats, and the role of guessing in test-taker performance. Depth of knowledge goes beyond the basic form-meaning link to consider other aspects of word knowledge. The concept of word association has played a dominant role in the design of such tests, but there is a need to create test formats to assess knowledge of word parts as well as a range of multi-word items apart from collocation.
{"title":"Towards a new sophistication in vocabulary assessment","authors":"J. Read","doi":"10.1177/02655322221125698","DOIUrl":"https://doi.org/10.1177/02655322221125698","url":null,"abstract":"Published work on vocabulary assessment has grown substantially in the last 10 years, but it is still somewhat outside the mainstream of the field. There has been a recent call for those developing vocabulary tests to apply professional standards to their work, especially in validating their instruments for specified purposes before releasing them for widespread use. A great deal of work on vocabulary assessment can be seen in terms of the somewhat problematic distinction between breadth and depth of vocabulary knowledge. Breadth refers to assessing vocabulary size, based on a large sample of words from a frequency list. New research is raising questions about the suitability of word frequency norms derived from large corpora, the choice of the word family as the unit of analysis, the selection of appropriate test formats, and the role of guessing in test-taker performance. Depth of knowledge goes beyond the basic form-meaning link to consider other aspects of word knowledge. The concept of word association has played a dominant role in the design of such tests, but there is a need to create test formats to assess knowledge of word parts as well as a range of multi-word items apart from collocation.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48362352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Future challenges and opportunities in language testing and assessment: Basic questions and principles at the forefront
Pub Date: 2023-01-01 | DOI: 10.1177/02655322221127896
Tineke Brunfaut
In this invited Viewpoint on the occasion of the 40th anniversary of the journal Language Testing, I argue that at the core of future challenges and opportunities for the field—both in scholarly and operational respects—remain basic questions and principles in language testing and assessment. Despite the high levels of sophistication of issues looked into, and methodological and operational solutions found, outstanding concerns still amount to: what are we testing, how are we testing, and why are we testing? Guided by these questions, I call for more thorough and adequate language use domain definitions (and a suitable broadening of research and testing methodologies to determine these), more comprehensive operationalizations of these domain definitions (especially in the context of technology in language testing), and deeper considerations of test purposes/uses and of their connections with domain definitions. To achieve this, I maintain that the field needs to continue investing in the topics of validation, ethics, and language assessment literacy, and engaging with broader fields of enquiry such as (applied) linguistics. I also encourage a more synthetic look at the existing knowledge base in order to build on this, and further diversification of voices in language testing and assessment research and practice.
{"title":"Future challenges and opportunities in language testing and assessment: Basic questions and principles at the forefront","authors":"Tineke Brunfaut","doi":"10.1177/02655322221127896","DOIUrl":"https://doi.org/10.1177/02655322221127896","url":null,"abstract":"In this invited Viewpoint on the occasion of the 40th anniversary of the journal Language Testing, I argue that at the core of future challenges and opportunities for the field—both in scholarly and operational respects—remain basic questions and principles in language testing and assessment. Despite the high levels of sophistication of issues looked into, and methodological and operational solutions found, outstanding concerns still amount to: what are we testing, how are we testing, and why are we testing? Guided by these questions, I call for more thorough and adequate language use domain definitions (and a suitable broadening of research and testing methodologies to determine these), more comprehensive operationalizations of these domain definitions (especially in the context of technology in language testing), and deeper considerations of test purposes/uses and of their connections with domain definitions. To achieve this, I maintain that the field needs to continue investing in the topics of validation, ethics, and language assessment literacy, and engaging with broader fields of enquiry such as (applied) linguistics. I also encourage a more synthetic look at the existing knowledge base in order to build on this, and further diversification of voices in language testing and assessment research and practice.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43816042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Administration, labor, and love
Pub Date: 2023-01-01 | DOI: 10.1177/02655322221127365
A. Ginther
Great opportunities for language testing practitioners are enabled through language program administration. Local language tests lend themselves to multiple purposes—for placement and diagnosis, as a means of tracking progress, and as a contribution to program evaluation and revision. Administrative choices, especially those involving a test, are strategic and can be used to transform a program’s identity and effectiveness over time.
{"title":"Administration, labor, and love","authors":"A. Ginther","doi":"10.1177/02655322221127365","DOIUrl":"https://doi.org/10.1177/02655322221127365","url":null,"abstract":"Great opportunities for language testing practitioners are enabled through language program administration. Local language tests lend themselves to multiple purposes—for placement and diagnosis, as a means of tracking progress, and as a contribution to program evaluation and revision. Administrative choices, especially those involving a test, are strategic and can be used to transform a program’s identity and effectiveness over time.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43447832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reflections on the past and future of language testing and assessment: An emerging scholar’s perspective
Pub Date: 2023-01-01 | DOI: 10.1177/02655322221126607
J. Burton
In its 40th year, Language Testing journal has served as the flagship journal for scholars, researchers, and practitioners in the field of language testing and assessment. This viewpoint piece, written from the perspective of an emerging scholar, discusses two possible future trends based on evidence going back to the very first issue of this journal. First, this paper outlines past efforts to describe and define the construct of second language communication, noting that much work has yet to be done for a more complete description in terms of interactional competence and nonverbal behavior. The second trend highlights the growing movement in applied linguistics toward research transparency through Open Science practices, including replication studies, the sharing of data and materials, and preregistration. This paper outlines work to date in Language Testing that encourages open practices and emphasizes the importance of these practices in assessment research.
{"title":"Reflections on the past and future of language testing and assessment: An emerging scholar’s perspective","authors":"J. Burton","doi":"10.1177/02655322221126607","DOIUrl":"https://doi.org/10.1177/02655322221126607","url":null,"abstract":"In its 40th year, Language Testing journal has served as the flagship journal for scholars, researchers, and practitioners in the field of language testing and assessment. This viewpoint piece, written from the perspective of an emerging scholar, discusses two possible future trends based on evidence going back to the very first issue of this journal. First, this paper outlines past efforts to describe and define the construct of second language communication, noting that much work has yet to be done for a more complete description in terms of interactional competence and nonverbal behavior. The second trend highlights the growing movement in applied linguistics toward research transparency through Open Science practices, including replication studies, the sharing of data and materials, and preregistration. This paper outlines work to date in Language Testing that encourages open practices and emphasizes the importance of these practices in assessment research.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47094561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reframing the discourse and rhetoric of language testing and assessment for the public square
Pub Date: 2023-01-01 | DOI: 10.1177/02655322221127421
L. Taylor
As applied linguists and language testers, we are in the business of “doing language”. For many of us, language learning is a lifelong passion, and we invest similar enthusiasm in our language assessment research and testing practices. Language is also the vehicle through which we communicate that enthusiasm to others, sharing our knowledge and experience with colleagues so we can all grow in understanding and expertise. We are actually quite good at communicating within our own community. But when it comes to interacting with people beyond our own field, are we such effective communicators? Wider society—politicians, journalists, policymakers, social commentators, teachers, and parents—all seem to find assessment matters hard to grasp. And I am not sure we as language testers do much to help them. So I find myself wondering why that is? Is it that our language is too specialised, or overly technical? Do we choose unhelpful words or images when we talk about testing? Worse still, do we sometimes come across as rather arrogant or patronising, perhaps even irrelevant to non-specialists’ needs and concerns? If so, could we perhaps consider reframing our discourse and rhetoric in future to improve our communicative effectiveness, and how might we do that?
{"title":"Reframing the discourse and rhetoric of language testing and assessment for the public square","authors":"L. Taylor","doi":"10.1177/02655322221127421","DOIUrl":"https://doi.org/10.1177/02655322221127421","url":null,"abstract":"As applied linguists and language testers, we are in the business of “doing language”. For many of us, language learning is a lifelong passion, and we invest similar enthusiasm in our language assessment research and testing practices. Language is also the vehicle through which we communicate that enthusiasm to others, sharing our knowledge and experience with colleagues so we can all grow in understanding and expertise. We are actually quite good at communicating within our own community. But when it comes to interacting with people beyond our own field, are we such effective communicators? Wider society—politicians, journalists, policymakers, social commentators, teachers, and parents—all seem to find assessment matters hard to grasp. And I am not sure we as language testers do much to help them. So I find myself wondering why that is? Is it that our language is too specialised, or overly technical? Do we choose unhelpful words or images when we talk about testing? Worse still, do we sometimes come across as rather arrogant or patronising, perhaps even irrelevant to non-specialists’ needs and concerns? If so, could we perhaps consider reframing our discourse and rhetoric in future to improve our communicative effectiveness, and how might we do that?","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47000811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
But who trains the language teacher educator who trains the language teacher? An empirical investigation of Chilean EFL teacher educators’ language assessment literacy
Pub Date: 2022-12-27 | DOI: 10.1177/02655322221134218
Salomé Villa Larenas, Tineke Brunfaut
Research has shown that language teachers typically feel underprepared for assessment aspects of their job. One reason may relate to how teacher education programmes prepare future teachers in this area. Research insights into how and to what extent teacher educators train future language teachers in language assessment matters are scarce, however, as are insights into the language assessment literacy (LAL) of the teacher educators themselves. Additionally, while research insights into the components that constitute LAL are increasingly available, how such components interrelate is largely unexplored. To help address these research gaps, we investigated the LAL of English as a Foreign Language teacher educators in Chile. Through interviews with 20 teacher educators and analysis of their language assessment materials, five LAL components were identified (language assessment knowledge, conceptions, context, practices, and learning), and two by-products of LAL (language assessor identity and self-efficacy). The components were found to interrelate in a complex manner, which we visualized with a model of concentric oval shapes, depicting how LAL is socially constructed (and re-constructed) from and for the specific context in which teacher educators’ practices are immersed. We discuss implications for LAL conceptualisations and for LAL research methodology.
{"title":"But who trains the language teacher educator who trains the language teacher? An empirical investigation of Chilean EFL teacher educators’ language assessment literacy","authors":"Salomé Villa Larenas, Tineke Brunfaut","doi":"10.1177/02655322221134218","DOIUrl":"https://doi.org/10.1177/02655322221134218","url":null,"abstract":"Research has shown that language teachers typically feel underprepared for assessment aspects of their job. One reason may relate to how teacher education programmes prepare future teachers in this area. Research insights into how and to what extent teacher educators train future language teachers in language assessment matters are scarce, however, as are insights into the language assessment literacy (LAL) of the teacher educators themselves. Additionally, while increasingly research insights are available on components that constitute LAL, how such components interrelate is largely unexplored. To help address these research gaps, we investigated the LAL of English as a Foreign Language teacher educators in Chile. Through interviews with 20 teacher educators and analysis of their language assessment materials, five LAL components were identified (language assessment knowledge, conceptions, context, practices, and learning), and two by-products of LAL (language assessor identity and self-efficacy). The components were found to interrelate in a complex manner, which we visualized with a model of concentric oval shapes, depicting how LAL is socially constructed (and re-constructed) from and for the specific context in which teacher educators’ practices are immersed. We discuss implications for LAL conceptualisations and for LAL research methodology.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2022-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46849230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Book Review: An Introduction to the Rasch Model with Examples in R
Pub Date: 2022-12-16 | DOI: 10.1177/02655322221140012
Zhiqing Lin, Huilin Chen
{"title":"Book Review: An Introduction to the Rasch Model with Examples in R","authors":"Zhiqing Lin, Huilin Chen","doi":"10.1177/02655322221140012","DOIUrl":"https://doi.org/10.1177/02655322221140012","url":null,"abstract":"","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2022-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46270092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Book Review: The Routledge Handbook of Language Testing
Pub Date: 2022-12-16 | DOI: 10.1177/02655322221140331
J. Norris
The second edition of The Routledge Handbook of Language Testing, published in 2022, is a hefty volume, covering a broad swath of theory, research, and practice in language testing over some 600+ pages. Editors Glenn Fulcher and Luke Harding have done a nice job of updating the first edition, bringing in a handful of new contributions for a total of 36 chapters, re-arranging the organization somewhat to collocate topics thematically, and encouraging revisions to nearly all of the included chapters. Compiling edited volumes, never mind substantial handbooks that are intended to reflect the entire field, like this one, is never an easy or straightforward endeavor. Choices inevitably must be made about which experts to invite, what topics to include and which ones to leave out, and how to arrange the contents and situate the contributions against the backdrop of an active and evolving domain of research and practice. On the whole, this book does a good job of reflecting a lot of what is on the minds of language testing researchers and practitioners as they go about the scholarship and business of language assessment, and it does so in a reader-friendly way, with relatively brief and consistently organized chapters produced by an impressive group of experts. I believe these characteristics recommend the book for use in seminars on language testing and as an authoritative reference for a variety of language testing stakeholders—indeed, many of these chapters will help in the cause of advancing language assessment literacy in multiple sectors (if we can only encourage their being read by individuals in those sectors...). In the following, I highlight a few dimensions of the volume that I find particularly useful and/or insightful, and I offer some observations on aspects that might have deserved more attention or perhaps should merit attention in the next edition. The book is arranged in 10 topical sections with three to five chapters each, fronted by a brief editorial introduction and ending with a subject and author index. In the introduction, the editors do a nice job of rationalizing the different sections of the book and introducing the key contributions of the distinct chapters. They also effectively link core ideas and themes that transcend individual chapters, thereby helping readers to notice important threads that connect the different perspectives and issues covered. Dispensing with one production quibble up front, the Index is not well compiled. While no doubt a challenge with so many contributing authors and such wide-ranging contents, a good index is all the more important for a big book like this one. Yet this index has numerous [...]
{"title":"Book Review: The Routledge Handbook of Language Testing","authors":"J. Norris","doi":"10.1177/02655322221140331","DOIUrl":"https://doi.org/10.1177/02655322221140331","url":null,"abstract":"The second edition of The Routledge Handbook of Language Testing, published in 2022, is a hefty volume, covering a broad swath of theory, research, and practice in language testing over some 600 + pages. Editors Glenn Fulcher and Luke Harding have done a nice job of updating the first edition, bringing in a handful of new contributions for a total of 36 chapters, re-arranging the organization somewhat to collocate topics thematically, and encouraging revisions to nearly all of the included chapters. Compiling edited volumes, never mind substantial handbooks that are intended to reflect the entire field, like this one, is never an easy or straightforward endeavor. Choices inevitably must be made about which experts to invite, what topics to include and which ones to leave out, and how to arrange the contents and situate the contributions against the backdrop of an active and evolving domain of research and practice. On the whole, this book does a good job of reflecting a lot of what is on the minds of language testing researchers and practitioners as they go about the scholarship and business of language assessment, and it does so in a reader-friendly way, with relatively brief and consistently organized chapters produced by an impressive group of experts. I believe these characteristics recommend the book for use in seminars on language testing and as an authoritative reference for a variety of language testing stakeholders—indeed, many of these chapters will help in the cause of advancing language assessment literacy in multiple sectors (if we can only encourage their being read by individuals in those sectors . . .). In the following, I highlight a few dimensions of the volume that I find particularly useful and/or insightful, and I offer some observations on aspects that might have deserved more attention or perhaps should merit attention in the next edition. The book is arranged in 10 topical sections with three to five chapters each, fronted by a brief editorial introduction and ending with a subject and author index. In the introduction, the editors do a nice job of rationalizing the different sections of the book and introducing the key contributions of the distinct chapters. They also effectively link core ideas and themes that transcend individual chapters, thereby helping readers to notice important threads that connect the different perspectives and issues covered. Dispensing with one production quibble up front, the Index is not well compiled. While no doubt a challenge with so many contributing authors and such wide-ranging contents, a good index is all the more important for a big book like this one. 
Yet this index has numerous 1140331 LTJ0010.1177/02655322221140331Language Testing</italic>Book Reviews research-article2022","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2022-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45239190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards more valid scoring criteria for integrated reading-writing and listening-writing summary tasks
Pub Date: 2022-12-12 | DOI: 10.1177/02655322221135025
Sathena Chan, Lyn May
Despite the increased use of integrated tasks in high-stakes academic writing assessment, research on rating criteria which reflect the unique construct of integrated summary writing skills is comparatively rare. Using a mixed-method approach of expert judgement, text analysis, and statistical analysis, this study examines writing features that discriminate summaries produced by 150 candidates at five levels of proficiency on integrated reading-writing (R-W) and listening-writing (L-W) tasks. The expert judgement revealed a wide range of features which discriminated R-W and L-W responses. When responses at five proficiency levels were coded by these features, significant differences were obtained in seven features, including relevance of ideas, paraphrasing skills, accuracy of source information, academic style, language control, coherence and cohesion, and task fulfilment across proficiency levels on the R-W task. The same features did not yield significant differences in L-W responses across proficiency levels. The findings have important implications for clarifying the construct of integrated summary writing in different modalities, indicating the possibility of expanding integrated rating categories with some potential for translating the identified criteria into automated rating systems. The results on the L-W task indicate the need for developing descriptors which can more effectively discriminate L-W responses.
{"title":"Towards more valid scoring criteria for integrated reading-writing and listening-writing summary tasks","authors":"Sathena Chan, Lyn May","doi":"10.1177/02655322221135025","DOIUrl":"https://doi.org/10.1177/02655322221135025","url":null,"abstract":"Despite the increased use of integrated tasks in high-stakes academic writing assessment, research on rating criteria which reflect the unique construct of integrated summary writing skills is comparatively rare. Using a mixed-method approach of expert judgement, text analysis, and statistical analysis, this study examines writing features that discriminate summaries produced by 150 candidates at five levels of proficiency on integrated reading-writing (R-W) and listening-writing (L-W) tasks. The expert judgement revealed a wide range of features which discriminated R-W and L-W responses. When responses at five proficiency levels were coded by these features, significant differences were obtained in seven features, including relevance of ideas, paraphrasing skills, accuracy of source information, academic style, language control, coherence and cohesion, and task fulfilment across proficiency levels on the R-W task. The same features did not yield significant differences in L-W responses across proficiency levels. The findings have important implications for clarifying the construct of integrated summary writing in different modalities, indicating the possibility of expanding integrated rating categories with some potential for translating the identified criteria into automated rating systems. The results on the L-W indicate the need for developing descriptors which can more effectively discriminate L-W responses.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42928623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The typology of second language listening constructs: A systematic review
Pub Date: 2022-12-07 | DOI: 10.1177/02655322221126604
Vahid Aryadoust, Lan Luo
This study reviewed conceptualizations and operationalizations of second language (L2) listening constructs. A total of 157 peer-reviewed papers published in 19 journals in applied linguistics were coded for (1) publication year, author, source title, location, language, and reliability and (2) listening subskills, cognitive processes, attributes, and listening functions potentially measured or investigated. Only 39 publications (24.84%) provided theoretical definitions for listening constructs, 38 of which were general or had a narrow construct coverage. Listening functions such as discriminative, empathetic, and analytical listening were largely unattended to in construct conceptualization in the studies. In addition, we identified 24 subskills, 27 cognitive processes, and 54 listening attributes (total = 105) operationalized in the studies. We developed a multilayered framework to categorize these features. The subskills and cognitive processes were categorized into five principal groups each (10 groups total), while the attributes were divided into three main groups. This multicomponential framework will be useful in construct delineation and operationalization in L2 listening assessment and teaching. Finally, limitations of the extant research and future directions for research and development in L2 listening assessment are discussed.
{"title":"The typology of second language listening constructs: A systematic review","authors":"Vahid Aryadoust, Lan Luo","doi":"10.1177/02655322221126604","DOIUrl":"https://doi.org/10.1177/02655322221126604","url":null,"abstract":"This study reviewed conceptualizations and operationalizations of second language (L2) listening constructs. A total of 157 peer-reviewed papers published in 19 journals in applied linguistics were coded for (1) publication year, author, source title, location, language, and reliability and (2) listening subskills, cognitive processes, attributes, and listening functions potentially measured or investigated. Only 39 publications (24.84%) provided theoretical definitions for listening constructs, 38 of which were general or had a narrow construct coverage. Listening functions such as discriminative, empathetic, and analytical listening were largely unattended to in construct conceptualization in the studies. In addition, we identified 24 subskills, 27 cognitive processes, and 54 listening attributes (total = 105) operationalized in the studies. We developed a multilayered framework to categorize these features. The subskills and cognitive processes were categorized into five principal groups each (10 groups total), while the attributes were divided into three main groups. This multicomponential framework will be useful in construct delineation and operationalization in L2 listening assessment and teaching. Finally, limitations of the extant research and future directions for research and development in L2 listening assessment are discussed.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":4.1,"publicationDate":"2022-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44181146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}