Unraveling the Effects of Task Sequencing on the Syntactic Complexity, Accuracy, Lexical Complexity, and Fluency of L2 Written Production
Pub Date: 2021-01-01 | DOI: 10.37213/cjal.2021.31306
Mahmoud Abdi Tabari, Michol Miller
Although several studies have explored the effects of task sequencing on second language (L2) production, there is no established set of criteria for sequencing tasks for learners in L2 writing classrooms. This study examined the effect of simple–complex task sequencing, manipulated along both resource-directing (± number of elements) and resource-dispersing (± planning time) factors, on L2 writing compared to individual task performance, using Robinson’s (2010) SSARC model of task sequencing. Upper-intermediate L2 learners (N = 90) were randomly divided into two groups: (1) participants who performed three writing tasks in a simple–complex sequence, and (2) participants who performed either the simple, less complex, or complex task. Findings revealed that simple–complex task sequencing led to increases in syntactic complexity, accuracy, lexical complexity, and fluency, as compared to individual task performance. Results are discussed in light of the SSARC model, and theoretical and pedagogical implications are provided.
{"title":"Unraveling the Effects of Task Sequencing on the Syntactic Complexity, Accuracy, Lexical Complexity, and Fluency of L2 Written Production","authors":"Mahmoud Abdi Tabari, Michol Miller","doi":"10.37213/cjal.2021.31306","DOIUrl":"https://doi.org/10.37213/cjal.2021.31306","url":null,"abstract":"Although several studies have explored the effects of task sequencing on second language (L2) production, there is no established set of criteria to sequence tasks for learners in L2 writing classrooms. This study examined the effect of simple ̶ complex task sequencing manipulated along both resource-directing (± number of elements) and resource-dispersing (± planning time) factors on L2 writing compared to individual task performance using Robinson’s (2010) SSARC model of task sequencing. Upper-intermediate L2 learners (N = 90) were randomly divided into two groups: (1) Participants who performed three writing tasks in a simple–complex sequence, and (2) participants who performed either the simple, less complex, or complex task. Findings revealed that simple-complex task sequencing led to increases in syntactic complexity, accuracy, lexical complexity, and fluency, as compared to individual task performance. Results are discussed in light of the SSARC model, and theoretical and pedagogical implications are provided.","PeriodicalId":43961,"journal":{"name":"Canadian Journal of Applied Linguistics","volume":"41 1","pages":""},"PeriodicalIF":0.5,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78504974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Levels of Engagement in Task-based Synchronous Computer Mediated Interaction
Pub Date: 2021-01-01 | DOI: 10.37213/cjal.2021.31319
Julio Torres, Íñigo Yanguas
Investigating task-based synchronous computer-mediated communication (SCMC) interaction has received increasing scholarly attention. However, studies have focused on negotiation of meaning and the quantity, focus, and resolution of language-related episodes (LREs). This study aims to broaden our understanding of the role of audio, video, and text SCMC conditions by additionally examining second language (L2) learners’ levels of engagement during the production of LREs as a result of interactive real-world tasks. We tested 52 dyads of intermediate L2 Spanish learners who completed a decision-making/writing task. Our main analysis revealed that dyads in the audio SCMC condition engaged in more limited LREs vis-à-vis the text SCMC group, and audio SCMC dyads also showed a trend of engaging more in elaborate LREs. The findings imply that interactive SCMC conditions can place differential demands on L2 learners, which affects the ways in which L2 learners address LREs during task-based interaction.
{"title":"Levels of Engagement in Task-based Synchronous Computer Mediated Interaction","authors":"Julio Torres, Íñigo Yanguas","doi":"10.37213/cjal.2021.31319","DOIUrl":"https://doi.org/10.37213/cjal.2021.31319","url":null,"abstract":"Investigating task-based synchronous computer-mediated communication (SCMC) interaction has increasingly received scholarly attention. However, studies have focused on negotiation of meaning and the quantity, focus and resolution of language related episodes (LREs). This study aims to broaden our understanding of the role of audio, video, and text SCMC conditions by additionally examining second language (L2) learners’ levels of engagement during the production of LREs as a result of interactive real-world tasks. We tested 52 dyads of L2 Spanish intermediate learners who completed a decision- making/writing task. Our main analysis revealed that dyads in the audio SCMC condition engaged in more limited LREs vis-à-vis the text SCMC group, and audio SCMC dyads also showed a trend of engaging more in elaborate LREs. The findings imply that interactive SCMC conditions can place differential demands on L2 learners, which has an effect on the ways in which L2 learners address LREs during task-based interaction.","PeriodicalId":43961,"journal":{"name":"Canadian Journal of Applied Linguistics","volume":"519 1","pages":""},"PeriodicalIF":0.5,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77182886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In Flow with Task Repetition During Collaborative Oral and Writing Tasks
Pub Date: 2021-01-01 | DOI: 10.37213/cjal.2021.31365
M. Zuniga, Caroline Payant
The present study draws on Flow Theory to examine the relationship between task repetition and the quality of learners’ subjective experience during task execution. Flow is defined as a positive experiential state characterized by intense focus and involvement in meaningful and challenging, but doable, tasks, and has been associated with enhanced self-confidence and task performance (Csikszentmihalyi, 2008). While research shows that certain task characteristics interact differentially with the quality of flow experiences, no research has specifically examined such interaction with task repetition. Participants (n = 24) were randomly assigned to a Task Repetition or a Procedural Repetition group. All participants first completed a two-way decision-making gap task in both the oral and written modalities and either repeated the identical task or a comparable task one week later. Data were collected with a flow perception questionnaire completed immediately following each task. Results show that repetition positively influenced learners’ flow experience, but that modality was an important mediating factor.
{"title":"In Flow with Task Repetition During Collaborative Oral and Writing Tasks","authors":"M. Zuniga, Caroline Payant","doi":"10.37213/cjal.2021.31365","DOIUrl":"https://doi.org/10.37213/cjal.2021.31365","url":null,"abstract":"The present study draws on Flow Theory to examine the relationship between task repetition and the quality of learners’ subjective experience during task execution. Flow is defined as a positive experiential state characterized by intense focus and involvement in meaningful and challenging, but doable tasks, which has been associated with enhanced self-confidence and task performance (Csikszentmihalyi, 2008). While research shows that certain task characteristics interact differentially with the quality of flow experiences, no research has specifically examined such interaction with task repetition. Participants (n=24) were randomly assigned to a Task Repetition or a Procedural Repetition group. All participants first completed a two-way decision-making gap task in both the oral and written modalities and either repeated the identical task or a comparable task one week later. Data were collected with a flow perception questionnaire, completed immediately following each task. Results show that repetition positively influenced learners’ flow experience, but that modality was an important mediating factor.","PeriodicalId":43961,"journal":{"name":"Canadian Journal of Applied Linguistics","volume":"111 1","pages":""},"PeriodicalIF":0.5,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85202111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Internationally Educated Nurses and the Canadian English Language Benchmark Assessment for Nurses: A Qualitative Test Validation Study of Test-Taker Accounts
Pub Date: 2020-10-16 | DOI: 10.37213/cjal.2020.30435
S. Baldwin, Liying Cheng
This qualitative validation study examines sixteen Internationally Educated Nurses’ (IENs’) accounts of the Canadian English Language Benchmark Assessment for Nurses (CELBAN) at two testing centres (Toronto and Hamilton). The study adopts both focus groups and one-on-one interviews to investigate the inferences drawn from the test and its consequences. Focus groups and interviews were conducted using an adapted version of the interview guide utilized in the TOEFL iBT investigation of test-taker accounts of construct representation and construct-irrelevant variance (DeLuca et al., 2013). While construct representation describes the degree of authenticity in the presentation of Canadian English language nursing tasks, construct-irrelevant variance refers to potential factors impacting the test-taking experience that might contribute to score variance not reflective of test-taker knowledge of the testing constructs (Messick, 1989, 1991, 1996). In this study, test-taker accounts of construct representation and construct-irrelevant variance constituted the data, which were coded and analyzed abductively via the sensitizing concepts derived from DeLuca et al. (2013) and Cheng and DeLuca (2011) on examining test-takers’ experience and their contribution to validity. Seven themes emerged, answering four research questions: How do IENs characterize their test experience? How do IENs describe the assessment constructs? What, if any, sources of construct-irrelevant variance (CIV) do IENs describe? Do IENs feel the language tasks are authentic? Overall, participants reported positive experiences with the CELBAN, while identifying some possible sources of CIV. Given the CELBAN’s widespread use for high-stakes decisions (as a component of nursing certification and licensure), further research on IEN test-taker responses to construct representation and construct-irrelevant variance will remain critical to our understanding of the role of language competency testing for IENs.
{"title":"Internationally Educated Nurses and the Canadian English Language Benchmark Assessment for Nurses: A Qualitative Test Validation Study of Test-Taker Accounts","authors":"S. Baldwin, Liying Cheng","doi":"10.37213/cjal.2020.30435","DOIUrl":"https://doi.org/10.37213/cjal.2020.30435","url":null,"abstract":"This qualitative validation study examines sixteen Internationally Educated Nurses’ (IENs’) accounts of the Canadian English Language Benchmark Assessment for Nurses (CELBAN) at two testing centres (Toronto and Hamilton). This study adopts both focus groups and one-on-one interviews to investigate the inferences drawn from the test, and its consequences. Focus groups and interviews were conducted using an adapted interview guide utilized in the TOEFL iBT investigation of test-taker accounts of construct representation and construct irrelevant variance (DeLuca et al., 2013). While construct representation describes the degree of authenticity in the presentation of Canadian English language nursing tasks, construct irrelevant variance refers to potential factors impacting the test-taking experience which might contribute to a score variance that was not reflective of test-taker knowledge of the testing constructs (Messick, 1989, 1991, 1996). In this study, test-taker accounts of construct representation and construct irrelevant variance constituted the data which were coded and analyzed abductively via the sensitizing concepts derived from DeLuca et al., and Cheng and DeLuca (2011) on examining test-takers’ experience and their contribution to validity. Seven themes emerged, answering four research questions: How do IENs characterize their test experience? How do IENs describe the assessment constructs? What, if any, sources of Construct Irrelevant Variance (CIV) do IENs describe? Do IENs feel the language tasks are authentic? Overall, participants reported positive experiences with the CELBAN, while identifying some possible sources of CIV. Given the CELBAN’s widespread use for high-stakes decisions (a component of nursing certification and licensure), further research of IEN-test-taker responses to construct representation and construct irrelevant variance will remain critical to our understanding of the role of language competency testing for IENs. ","PeriodicalId":43961,"journal":{"name":"Canadian Journal of Applied Linguistics","volume":"31 1","pages":""},"PeriodicalIF":0.5,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80987400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introduction to Special Issue
Pub Date: 2020-10-16 | DOI: 10.37213/cjal.2020.31347
S. Saif, Samira ElAtia
{"title":"Introduction to Special Issue","authors":"S. Saif, Samira ElAtia","doi":"10.37213/cjal.2020.31347","DOIUrl":"https://doi.org/10.37213/cjal.2020.31347","url":null,"abstract":"","PeriodicalId":43961,"journal":{"name":"Canadian Journal of Applied Linguistics","volume":"22 1","pages":""},"PeriodicalIF":0.5,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81459050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Made-in-Canada Second Language Framework for K-12 Education: Another Case Where No Prophet is Accepted in their Own Land
Pub Date: 2020-10-16 | DOI: 10.37213/cjal.2020.30434 | Pages: 141-167
M. Bournot-Trites, L. Friesen, C. Ruest, B. Zumbo
To ensure quality of education, a language framework should be the foundation on which second language curricula are developed. In 2010, the Council of Ministers of Education, Canada (CMEC), as suggested by Vandergrift (2006a, 2006b), recommended the use of the Common European Framework of Reference (CEFR) in the K-12 Canadian school context and presented several considerations for adaptation to be addressed before it was adopted and used. Although the CEFR is partially used across Canada, few of the CMEC’s considerations have been met to date. Given this state of affairs, we suggest the made-in-Canada Canadian Language Benchmarks and Niveaux de compétence linguistique canadiens (CLB/NCLC) as an alternative. We argue that the CLB/NCLC, profoundly revised in 2012, best embrace the Canadian context and that, using Vandergrift’s criteria for a valid language framework, the CLB/NCLC are now superior to the CEFR in many respects.
{"title":"A Made-in-Canada Second Language Framework for K-12 Education: Another Case Where No Prophet is Accepted in their Own Land","authors":"M. Bournot-Trites, L. Friesen, C. Ruest, B. Zumbo","doi":"10.37213/cjal.2020.30434","DOIUrl":"https://doi.org/10.37213/cjal.2020.30434","url":null,"abstract":"To ensure quality of education, a language framework should be the foundation on which second language curricula are developed. In 2010, the Council of Ministers of Education, Canada (CMEC), as suggested by Vandergrift (2006a, 2006b), recommended the use of the Common European Framework of Reference (CEFR) in the K-12 Canadian school context and presented several considerations for adaptation before it should be adopted and used. Although the CEFR is partially used across Canada, few of the CMEC’s considerations have been met to date. Given this state of affairs, we suggest the made-in-Canada, Canadian Language Benchmarks and les Niveaux de competence linguistique canadiens (CLB/NCLC) as an alternative. We argue that the CLB/NCLC, profoundly revised in 2012, best embrace the Canadian context and, using Vandergrift’s criteria for a valid language framework, that CLB/NCLC are now superior to the CEFR in many aspects.","PeriodicalId":43961,"journal":{"name":"Canadian Journal of Applied Linguistics","volume":"5 1","pages":"141-167"},"PeriodicalIF":0.5,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77035188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adossement des épreuves d’expression orale et écrite du Test de connaissance du français (TCF) sur les Niveaux de compétences linguistiques canadiens (NCLC) et correspondance avec les niveaux du Cadre européen commun de référence pour les langues (CECRL)
Pub Date: 2020-10-16 | DOI: 10.37213/cjal.2020.30437
Vincent V. Folny
In 2015, the CIEP conducted a study to align the written and oral productions of the Test de connaissance du français (TCF) with the NCLC and to establish a correspondence with the CEFR levels. Ultimately, the aim was to ensure that candidates for immigration to Canada receive a sound interpretation of their proficiency level and to be able to explain the procedures put in place for interpreting scores. To align the TCF written and oral expression tests with the NCLC levels, several procedures and studies were carried out: use of the CEFR levels assigned to a selection of productions during TCF marking; a seminar organized with panellists to assign NCLC levels to those same productions; psychometric analyses to calibrate the selection of productions; an evaluation of the relationship between the levels assigned on the two scales (NCLC and CEFR) to verify the convergence of the results; and qualitative analyses of the NCLC and CEFR descriptors and their mapping.
{"title":"Adossement des épreuves d’expression orale et écrite du Test de connaissance du français (TCF) sur les Niveaux de compétences linguistiques canadiens (NCLC) et correspondance avec les niveaux du Cadre européen commun de référence pour les langues (CECRL)","authors":"Vincent V. Folny","doi":"10.37213/cjal.2020.30437","DOIUrl":"https://doi.org/10.37213/cjal.2020.30437","url":null,"abstract":"En 2015, le CIEP a mené une étude afin de pouvoir adosser les productions écrites et orales du Test de connaissance de français (TCF) aux NCLC et d’établir une correspondance avec les niveaux du CECRL. In fine, Il s’agissait d’assurer aux candidats à l’immigration au Canada une bonne interprétation de leur niveau de compétence et de pouvoir expliquer les procédures mise en place pour l’interprétation des scores. Pour assurer l’adossement des épreuves d’expression écrites et orales du TCF aux niveaux NCLC, plusieurs procédures et études ont été mises en place : utilisation des niveaux CECR attribués à une sélection de productions au cours des corrections du TCF, séminaire organisé avec des panélistes pour attribuer des niveaux NCLC à ces mêmes productions, analyses psychométriques pour le calibrage de la sélection de productions, évaluation du lien entre les niveaux attribués avec les deux échelles (NCLC et CECR) afin de vérifier la convergence des résultats, analyses qualitatives des descripteurs NCLC et CECR et mise en correspondance.","PeriodicalId":43961,"journal":{"name":"Canadian Journal of Applied Linguistics","volume":"73 1","pages":""},"PeriodicalIF":0.5,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82255962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the Alignment Between the CELPIP-General Reading Test and the Canadian Language Benchmarks: A Content Validation Study
Pub Date: 2020-10-16 | DOI: 10.37213/cjal.2020.30649
Michelle Y. Chen, Jennifer J. Flasko
Seeking evidence to support content validity is essential to test validation. This is especially the case in contexts where test scores are interpreted in relation to external proficiency standards and where new test content is constantly being produced to meet test administration and security demands. In this paper, we describe a modified scale-anchoring approach to assessing the alignment between the Canadian English Language Proficiency Index Program (CELPIP) test and the Canadian Language Benchmarks (CLB), the proficiency framework to which the test scores are linked. We discuss how proficiency frameworks such as the CLB can be used to support the content validation of large-scale standardized tests through an evaluation of the alignment between the test content and the performance standards. By sharing both the positive implications and challenges of working with the CLB in high-stakes language test validation, we hope to help raise the profile of this national language framework among scholars and practitioners.
{"title":"Investigating the Alignment Between the CELPIP-General Reading Test and the Canadian Language Benchmarks: A Content Validation Study","authors":"Michelle Y. Chen, Jennifer J. Flasko","doi":"10.37213/cjal.2020.30649","DOIUrl":"https://doi.org/10.37213/cjal.2020.30649","url":null,"abstract":"\u0000\u0000\u0000Seeking evidence to support content validity is essential to test validation. This is especially the case in contexts where test scores are interpreted in relation to external proficiency standards and where new test content is constantly being produced to meet test administration and security demands. In this paper, we describe a modified scale- anchoring approach to assessing the alignment between the Canadian English Language Proficiency Index Program (CELPIP) test and the Canadian Language Benchmarks (CLB), the proficiency framework to which the test scores are linked. We discuss how proficiency frameworks such as the CLB can be used to support the content validation of large-scale standardized tests through an evaluation of the alignment between the test content and the performance standards. By sharing both the positive implications and challenges of working with the CLB in high-stakes language test validation, we hope to help raise the profile of this national language framework among scholars and practitioners.\u0000\u0000\u0000","PeriodicalId":43961,"journal":{"name":"Canadian Journal of Applied Linguistics","volume":"59 1","pages":""},"PeriodicalIF":0.5,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74067455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comments from the Chalkface Margins: Teachers’ Experiences with a Language Standard, Canadian Language Benchmarks
Pub Date: 2020-10-16 | DOI: 10.37213/cjal.2020.30458
Yuliya Desyatova
While the Canadian Language Benchmarks (CLB) document has been a milestone in supporting the teaching of English as an additional language to adults in Canada, few studies have examined practitioners’ experiences with the language standard. The expectation of ongoing use of the CLB by teachers in the Language Instruction for Newcomers to Canada (LINC) program became a rigid requirement with the implementation of portfolio-based language assessment (PBLA). However, the CLB-related literature has been mostly conceptual and aspirational, while practitioners’ voices have remained on the margins of research and policy making. This article examines teacher comments on the CLB collected during a large mixed-methods exploratory project on PBLA implementation (Desyatova, 2018, 2020). While some practitioners appreciated the standard and its impact, the majority of comments reflected comprehensibility and interpretation challenges experienced by both teachers and learners. These challenges were further aggravated by the pressures of PBLA as a mandatory assessment protocol.
{"title":"Comments from the Chalkface Margins: Teachers’ Experiences with a Language Standard, Canadian Language Benchmarks","authors":"Yuliya Desyatova","doi":"10.37213/cjal.2020.30458","DOIUrl":"https://doi.org/10.37213/cjal.2020.30458","url":null,"abstract":"While the Canadian Language Benchmarks (CLB) document has been a milestone in supporting the teaching of English as an additional language to adults in Canada, few studies examined practitioners’ experiences with the language standard. The expectation of ongoing use of the CLB by teachers in the Language Instruction for Newcomers to Canada (LINC) program became a rigid requirement with the implementation of portfolio-based language assessment (PBLA). However, the CLB-related literature has been mostly conceptual and aspirational, while practitioners’ voices have been on the margins of research and policy making. This article examines teacher comments on the CLB, as collected during a large mixed-methods exploratory project on PBLA implementation (Desyatova, 2018, 2020). While some practitioners appreciated the standard and its impact, the majority of comments reflected comprehensibility and interpretation challenges, experienced by both teachers and learners. These challenges were further aggravated by the pressures of PBLA as a mandatory assessment protocol. ","PeriodicalId":43961,"journal":{"name":"Canadian Journal of Applied Linguistics","volume":"94 1","pages":""},"PeriodicalIF":0.5,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74781733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}