An Interventionist Dynamic Assessment Approach to College English Writing in China
Youjun Tang, Xiaomei Ma
Pub Date: 2022-12-10 | DOI: https://doi.org/10.1080/15434303.2022.2155165
ABSTRACT This article explores the value of dynamic assessment (DA) for college English writing (CEW), a required course for millions of students in China that typically enrolls 50 students per class. An interventionist approach to DA, in which mediation and administration are standardized, was selected and supplemented with a construct-descriptor-based rating checklist used as a writing assessment before and after an eight-week instruction phase. The DA group received graduated mediation focused on the constructs and descriptors from the scale, while a control group received holistic corrections. Data were analyzed through ANOVA and MANCOVA, revealing variable development on specific constructs but significantly greater overall improvement by the DA group. The results are interpreted according to the degree of change as indicative of the zone of proximal development. The value of the construct-driven scale and its associated descriptors in the mediational process is also discussed. It is argued that interventionist DA is equipped to identify the components and processes within a construct, and in so doing offers the possibility of fine-tuning teachers’ and learners’ understanding of individual problem areas.
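The group comparison reported in this abstract can be sketched with a one-way ANOVA on pre-to-post gain scores. This is a minimal pure-Python illustration, not the study’s analysis: the gain scores below are invented, and the study’s MANCOVA step is not shown.

```python
def one_way_anova(groups):
    """One-way ANOVA: returns (F, df_between, df_within) for a list of score lists."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group's own mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

da_gains      = [8.5, 7.0, 9.2, 6.8, 8.1, 7.7, 9.0, 6.5]  # hypothetical DA-group gains
control_gains = [4.2, 5.1, 3.8, 4.9, 5.5, 4.0, 4.6, 5.2]  # hypothetical control-group gains
f, dfb, dfw = one_way_anova([da_gains, control_gains])
print(f"F({dfb}, {dfw}) = {f:.2f}")
```

With two groups, this F test is equivalent to an independent-samples t test; the abstract’s “significantly greater improvement” corresponds to F exceeding the critical value for the relevant degrees of freedom.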
Dynamic Assessment of the Learning Potential of Chinese as a Second Language
Zhijun Sun, Peng Xu, Jianqi Wang
Pub Date: 2022-12-04 | DOI: https://doi.org/10.1080/15434303.2022.2151911
ABSTRACT The construct of learning potential has been proposed to capture differences between learners’ independent performance and their performance during Dynamic Assessment (DA). This paper introduces a new learning potential score (LPS) formula, implemented in a DA study involving Pakistani learners of L2 Chinese. Learners were randomly assigned to a control or experimental group and administered a pre-test, a post-test, and a more difficult transfer test, each focused on verb-resultative constructions. The new LPS formula allowed for greater differentiation of learner trajectories.
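The abstract does not reproduce the new formula, but the baseline LPS that such proposals typically revise is Kozulin and Garb’s (2002) learning potential score, LPS = (2·S_post − S_pre) / S_max. The sketch below implements that established formula with invented scores; it is not the paper’s new formula.

```python
def learning_potential_score(pre: float, post: float, max_score: float) -> float:
    """Kozulin & Garb's (2002) LPS: weights the mediated (post) score twice,
    subtracts the independent (pre) score, and normalizes by the maximum
    attainable score. Values near or above 1.0 suggest high responsiveness
    to mediation."""
    return (2 * post - pre) / max_score

# Two hypothetical learners with the same post-test score but different starting points:
print(learning_potential_score(pre=10, post=16, max_score=20))  # large gain -> higher LPS
print(learning_potential_score(pre=14, post=16, max_score=20))  # small gain -> lower LPS
```

The example shows why a gain-sensitive score can differentiate trajectories that a raw post-test cannot: both learners end at 16/20, but their LPS values differ.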
A Study on Peer Mediation in Dynamic Assessment of Translation Revision Competence
Yaqing Liang, Yanzhi Li, Zhonggang Sang
Pub Date: 2022-12-01 | DOI: https://doi.org/10.1080/15434303.2022.2153050
ABSTRACT This study investigated how peer-mediated Dynamic Assessment (DA) unfolded in the translation revision competence (TRC) of Master of Translation and Interpreting (MTI) students in China. Thirty subjects first completed three revision tasks and were rated as high- or low-level performers according to their average scores on the first two tasks. Students were subsequently assigned the role of either learner or peer mediator. Peer mediators received training in a graduated-prompts approach to DA to learn how to provide their peers with mediation. Peer mediation sessions were conducted with mediators and learners paired at random and directed to jointly review their third revision. Afterwards, all participants re-revised their final texts, justified their changes, and were interviewed about their attitudes towards peer interaction and their progress in TRC. Diagnosis of TRC comprised scores on the first two revisions as well as on the third revision following peer mediation, with this latter score indicating responsiveness to mediation and interpreted as the Zone of Proximal Development. The findings indicated that peer mediation may help improve both mediators’ and learners’ TRC, though other potential factors at work should not be ignored. The peer engagement process allowed participants to improve their TRC in terms of justification and interpersonal skills. This research explored the application of DA in translation training and provides a process-oriented evaluation approach for translation studies.
Gazing into Cognition: Eye Behavior in Online L2 Speaking Tests
J. Burton
Pub Date: 2022-11-23 | DOI: https://doi.org/10.1080/15434303.2022.2143680
ABSTRACT The effects of question or task complexity on second language speaking have traditionally been investigated using complexity, accuracy, and fluency measures. Response processes in speaking tests, however, may manifest in other ways, such as through nonverbal behavior. Eye behavior, in the form of averted gaze or blinking frequency, has been found to play an important role in regulating information in studies on human cognition, and it may therefore be an important subconscious signal of test question difficulty in language testing. In this study, 15 CEFR B2/C1-level English learners took a Zoom-based English test with ten questions spanning six CEFR complexity levels. The participants’ eye behaviors were recorded and analyzed between the moment the test question ended and the beginning of their response. The participants additionally provided self-report data on their perceptions of test-question difficulty. Results indicated that as test questions increased in difficulty, participants were more likely to avert their gaze from the interlocutor. They did not, however, blink more frequently as difficulty changed. These results have methodological implications for research on test validation and the study of nonverbal behavior in speaking tests.
Assessment of English Learners and Their Peers in the Content Areas: Expanding What “Counts” as Evidence of Content Learning
Scott E. Grapin
Pub Date: 2022-11-15 | DOI: https://doi.org/10.1080/15434303.2022.2147072
ABSTRACT In this article, I argue for expanding what “counts” as evidence of content learning in the assessment of English learners (ELs) and their peers in the content areas. ELs bring expansive meaning-making resources to content classrooms that are valuable assets for meeting the ambitious learning goals of the latest K-12 education reform. Traditionally, however, the assessment of ELs in the content areas (e.g., science, language arts) has been pursued in restrictive ways, with a narrow focus on demonstrating learning through the written language modality and independent performance. This disconnect between the expansive meaning-making resources of ELs and the restrictive nature of content assessments limits ELs’ opportunities to demonstrate what they know and can do and ultimately serves to perpetuate deficit views of these students. I begin by providing contextual background on classroom assessment aligned to the latest standards in U.S. K-12 education. Then, I present two studies that illustrate two different expansive assessment approaches with ELs in elementary science: (a) multimodal assessment and (b) dynamic assessment. Finally, I highlight synergies of these studies with related research efforts across diverse contexts, toward the goal of developing a collective vision of expansive assessment that leverages ELs’ expansive ways of making meaning.
Investigating Second Language (L2) Reading Subskill Associations: A Cognitive Diagnosis Approach
Huilin Chen, Yuyang Cai, J. de la Torre
Pub Date: 2022-11-07 | DOI: https://doi.org/10.1080/15434303.2022.2140050
ABSTRACT This study uses a cognitive diagnosis model (CDM) approach to investigate the associations among specific L2 reading subskills. Participants were 1,203 Year-4 English-major college students randomly selected from the nationwide test takers of the Test for English Majors Band 8 (TEM8), a large-scale English proficiency test for senior English majors in China. Their English reading was measured using a reading comprehension subtest of the TEM8. Based on the CDM output on latent class size estimates, the chi-square test of independence was used to uncover the associations among reading subskills, and odds ratio estimation was used to determine the strengths of those associations. The CDM output on attribute mastery prevalence was used to establish the stochastic direction of the associations between reading subskills. The study yields two main findings. First, a reading subskill network displaying significant subskill associations, together with their strengths and directions, can be established through a CDM approach. Second, the patterns of reading subskill associations based on cognitive levels and local/global comprehension resonate with major reading process models and reflect the hierarchical and compensatory characteristics of reading subskills.
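The association tests this abstract names can be sketched on a 2×2 mastery table: a chi-square test of independence for whether mastery of two subskills co-occurs, and an odds ratio for the strength of that co-occurrence. The counts below are invented (summing roughly to the study’s 1,203 test takers), not the study’s latent-class estimates.

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic (df = 1, no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def odds_ratio(a, b, c, d):
    """Odds ratio (a*d)/(b*c); values > 1 mean mastery of the two
    subskills tends to co-occur."""
    return (a * d) / (b * c)

# Hypothetical counts: rows = mastery of subskill 1, columns = mastery of subskill 2.
a, b, c, d = 520, 130, 110, 443
chi2 = chi_square_2x2(a, b, c, d)
print(f"chi2 = {chi2:.1f} (df = 1; critical value at alpha = .05 is 3.84)")
print(f"odds ratio = {odds_ratio(a, b, c, d):.2f}")
```

In the study’s design this pair of statistics would be computed for each pair of subskills, and the significant pairs assembled into the subskill network the abstract describes.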
Enhancing EFL Learners’ Reading Proficiency through Dynamic Assessment
Yanfeng Yang, David D. Qian
Pub Date: 2022-10-28 | DOI: https://doi.org/10.1080/15434303.2022.2132160
ABSTRACT Grounded in Vygotsky’s Sociocultural Theory, Dynamic Assessment (DA) integrates teaching and assessment through mediator-learner interactions to promote learner development. This study employed interactionist DA over seven weeks to diagnose Chinese university EFL learners’ reading difficulties and promote their reading proficiency. The design included a pre-test, a four-week Enrichment Program, a post-test, and a transfer test. Five learners completed each test in both a non-dynamic (NDA) and a DA form. The learners’ individual interactions with a mediator in DA were recorded, transcribed, and analyzed in NVivo. In addition, the learners’ independent performances (IPs) on the NDA and DA, the difficulties they encountered in the process, the mediator’s prompts, and the learners’ mediated performances (MPs) were all identified and analyzed. Comparisons of the learners’ IPs and MPs across the tests showed that DA contributed to the learners’ reading proficiency development, and this progress was evident in both their post-test IPs and MPs.
A Review on Language Assessment Literacy: Trends, Foci and Contributions
L. Gan, Ricky Lam
Pub Date: 2022-10-19 | DOI: https://doi.org/10.1080/15434303.2022.2128802
ABSTRACT Language assessment literacy (LAL), an increasingly prominent research topic, has generated substantial literature in language testing and assessment. Thus far, however, there seem to be few comprehensive reviews of this research topic. The current scoping study synthesised 81 LAL papers published from 2008 to 2020. It addressed research questions concerning (1) the overall trend and progress of LAL research, (2) its research foci, and (3) the characteristics of its implications for language teacher education and professional development. The review found an upward trend in LAL studies, which were conducted predominantly in the Asia-Pacific region, Europe and the Middle East, and that most studies employed qualitative rather than quantitative or mixed-methods designs. An overwhelming majority of studies focused on language teachers, especially EFL teachers, while few were conducted from the perspectives of learners, policy makers, language testers, teacher educators and other stakeholders. The review also discovered that most studies researched stakeholders’ LAL levels, needs and development, overlooking LAL developmental trajectories, localised LAL components, the development of LAL measures, perceptions of LAL and LAL impact. Three characteristics were identified from the implications of LAL studies as contributions to language teacher education and professional development. Based on the findings, some guidelines were suggested for future research.
Content Analysis of Test Taker Responses on an AP Japanese Computer-Simulated Conversation Test: A Mixed Methods Approach for A Validity Argument
Nana Suzumura
Pub Date: 2022-09-30 | DOI: https://doi.org/10.1080/15434303.2022.2130326
ABSTRACT The present study is part of a larger mixed methods project that investigated the speaking section of the Advanced Placement (AP) Japanese Language and Culture Exam. It investigated assumptions underlying the evaluation inference through a content analysis of test taker responses. Results of the content analysis were integrated with those of a many-facet Rasch analysis of the same speech data. The study found that most information-seeking prompts elicited a good-sized ratable speech sample with relevant content, and the rating criteria seemed to fit the nature of the interaction. Information-seeking prompts therefore generally provided appropriate evidence of test takers’ ability. In contrast, non-information-seeking prompts, such as requests and expressive prompts, tended to have difficulty eliciting a good-sized ratable speech sample with relevant content, and the response expectations realized in the rating criteria did not fit the nature of the interaction. Thus, non-information-seeking prompts showed greater potential to become sources of measurement error under the current test design. This article discusses possible solutions to increase the validity of the evaluation inference. Findings from the present study should be useful for the future development of computer-based L2 tests that aim to assess interpersonal communication skills.
Another Generation of Fundamental Considerations in Language Assessment: A Festschrift in Honor of Lyle F. Bachman
Yo In’nami, Rie Koizumi
Pub Date: 2022-09-06 | DOI: https://doi.org/10.1080/15434303.2022.2119568