Hamp-Lyons, L. (2017). Language assessment literacy for language learning-oriented assessment. Studies in Language Assessment. doi:10.58379/lixl1198

This paper reflects on the findings of a small-scale, exploratory study of whether and how learning-oriented assessment opportunities might be revealed in, or inserted into, formal speaking tests, in order to provide language assessment literacy opportunities for language teachers teaching test preparation courses as well as teachers training to become speaking test raters. Hamp-Lyons and Green (2014) closely studied a set of authentic speaking test video samples from the Cambridge: First (First Certificate in English) speaking test, in order to learn whether, and where, learning-oriented behaviours could be encouraged or added to interlocutors’ behaviours without disrupting the required reliability and validity of the test. We paid particular attention to some basic components of effective interaction that we would want an examiner or interlocutor to exhibit if they seek to encourage interactive responses from test candidates: body language (in particular eye contact), intonation, pacing and pausing, management of turn-taking, and elicitation of candidate-candidate interaction. We call this shift in focus, towards viewing tests as learning opportunities, learning-oriented language assessment (LOLA).
Scarino, A. (2017). Developing assessment literacy of teachers of languages: A conceptual and interpretive challenge. Studies in Language Assessment. doi:10.58379/rxyj7968

The teaching and learning of (foreign) languages in the context of globalisation is at a juncture in Australian education where fundamental changes in the field present distinctive challenges for teachers. These changes necessitate a reconceptualisation of the constructs and alter the very nature of assessment: the conceptualisation of what it is that is to be assessed, the processes used to elicit evidence of student learning, and the frames of reference that provide the context for making judgments about students’ language learning. In this paper I discuss the shift from communicative language teaching towards an intercultural orientation in language learning. Based on data from a three-year study that investigated teacher assessment of language learning from an intercultural perspective in a range of specific languages in the K–12 context, I discuss the nature of the challenge for teachers as they develop their assessment practices. This challenge is characterised as both conceptual and interpretive. I conclude by drawing implications for developing the assessment literacy of teachers of languages.
{"title":"Read, J. (Ed) Post-admission Language Assessment of University Students","authors":"M. Czajkowski","doi":"10.58379/lhrm6354","DOIUrl":"https://doi.org/10.58379/lhrm6354","url":null,"abstract":"<jats:p>n/a</jats:p>","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90302343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fan, J. (2016). The construct and predictive validity of a self-assessment scale. Studies in Language Assessment. doi:10.58379/jdlz9308

Guided by the theory of the interpretive validity argument, this study investigated the plausibility and accuracy of five sets of warrants deemed crucial to the validity of a self-assessment (SA) scale designed and used in a local EFL context. Methodologically, this study utilized both Rasch measurement theory and structural equation modeling (SEM) to examine the five warrants and their respective rebuttals. Results from Rasch analysis indicated that the scale could reliably distinguish students at different proficiency levels. Among the 26 can-do statements in the SA scale, only one statement failed to fit the expectations of the Rasch model. Furthermore, each category was found to function as intended, though the first category was somewhat underused. Confirmatory factor analysis of the SA data supported the tenability of the Higher-Order Factor model, which is consistent with the current view of L2 ability. Structural regression analysis revealed that the association between students’ self-assessments and their scores on a standardized proficiency test was moderately strong. The multiple strands of evidence generated by various quantitative analyses of the SA data generally supported the validity of the SA scale. Future research, however, is warranted to examine other inferences in the validity argument structure, particularly in relation to the utility of the SA scale in English teaching and learning.
Youn, S., & Im, S. (2016). Testing measurement invariance of an EAP listening placement test across undergraduate and graduate students. Studies in Language Assessment. doi:10.58379/tznp6615

The increasing number of international undergraduates enrolled in English-medium universities creates challenges for an existing EAP (English for Academic Purposes) placement test, especially when the validity of the existing test has not been examined with incoming undergraduate examinees. To address this issue from a measurement perspective, this study tested measurement invariance in a listening placement test across undergraduate and graduate examinees, to investigate whether the test measures the same trait dimension across qualitatively distinct groups of examinees. Using 590 students’ listening placement test results, the best-fitting baseline model was identified first, and then competing models embodying a series of increasingly restrictive hypotheses were compared to test the measurement and structural invariance of the target test across the undergraduate and graduate examinees. Measurement invariance across the undergraduate and graduate examinees held, indicating invariant factors, equal factor loadings for each item, and equal error variances. However, structural invariance was not completely established, especially for the factor means across the two groups, which may suggest different score interpretations and uses depending on examinees’ academic status.
Macqueen, S., O'Hagan, S., & Hughes, B. (2016). Negotiating the boundary between achievement and proficiency: An evaluation of the exit standard of an academic English pathway program. Studies in Language Assessment. doi:10.58379/krdu8216

Academic English programs are popular pathways into English-medium university courses across the world. A typical program design hinges on an established university entrance standard, e.g. IELTS 6.5, and extrapolates the timing and structure of the pathway stages in relation to the test standard. The general principle is that the course assessments substitute for the test standard, so that successful completion of the course is considered equivalent to achieving the minimum test standard for university entrance. This study reports on an evaluation of such course assessments at a major Australian university. The evaluation sought to determine the appropriateness of the exit standard in relation to an independent measure of academic English ability. It also explored the suitability of the course final assessments used to produce measures in relation to that standard, by investigating the robustness of the processes and instruments and their appropriateness in relation to the course and the target academic domain. The evaluation was revealing about the difficult relationship between best practice in achievement testing in academic English pathway programs and external proficiency test standards. Using the sociological concept of ‘boundary object’ worlds (Star & Griesemer, 1989), we suggest that program evaluations that arise from a specific institutional concern for meeting adequate language standards can be informative about interactions between assessments in use.
Spöttl, C., Kremmel, B., Holzknecht, F., & Alderson, J. (2016). Evaluating the achievements and challenges in reforming a national language exam: The reform team’s perspective. Studies in Language Assessment. doi:10.58379/qfjy5510

This paper outlines the reform of the national school-leaving exam in Austria from a teacher-designed exam to a professionally developed and standardized exam for the foreign languages English, French, Italian and Spanish, evaluating from the project team’s perspective the unexpected challenges met along the way. It describes the assessment context prior to the reform to illustrate the perceived need for change, and outlines the steps taken to address this need. The paper explains how key features of the exam reform project were implemented step by step to raise awareness among stakeholders and convince authorities to support and adopt the new approach. Reporting on the various stages of the project, it evaluates its success in introducing one standardized CEFR-based test for all students nationwide. The paper highlights in particular the unexpected political, technical and practical challenges faced, and how these were addressed, overcome or endured, and with what consequences. The paper concludes with reflections and recommendations on how comparable test development projects may be approached.
{"title":"Introduction to Special Issue","authors":"C. Elder","doi":"10.58379/ydms1439","DOIUrl":"https://doi.org/10.58379/ydms1439","url":null,"abstract":"<jats:p>n/a</jats:p>","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":"300 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72391740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Read, J. (2016). Defining assessment standards for a new national tertiary-level qualification. Studies in Language Assessment. doi:10.58379/cthx3423

In this era of public accountability, defining levels of performance for assessment purposes has become a major consideration for educational institutions. This was certainly true of the national qualifications authority’s development of the New Zealand Certificates of English Language (NZCEL), a five-level sequence of awards for learners of English as an additional language at the post-secondary level, implemented in 2014. The process of defining the five levels involved benchmarking of standards both nationally and internationally, particularly in relation to the Common European Framework of Reference (CEFR). This paper presents an outsider’s view of the definition of standards for the NZCEL, based on information provided by key participants at the national and local levels. The process has involved taking account not only of the CEFR but also of the New Zealand Qualifications Framework (NZQF) and the band score levels of the International English Language Testing System (IELTS). The paper focuses in particular on the issue of establishing the equivalence of NZCEL 4 (Academic) to other recognised measures of English language proficiency as an admission requirement for undergraduate study for international students. The benchmarking process was both multi-faceted and open-ended, in that several issues remained unresolved as implementation of programmes leading to NZCEL 4 (Academic) proceeded. At the time of writing, the NZCEL qualifications are scheduled for a formal review, and the paper concludes with a discussion of the issues that ideally should be addressed in evaluating the qualification to date.
{"title":"Kunnan, A. J. (Ed.) Talking about language assessment: The LAQ interviews","authors":"Paul Gruba","doi":"10.58379/jjli5881","DOIUrl":"https://doi.org/10.58379/jjli5881","url":null,"abstract":"<jats:p>n/a</jats:p>","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80469845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}