Use of innovative technology in oral language assessment

IF 2.7 | Zone 3 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH | Assessment in Education: Principles, Policy & Practice | Pub Date: 2021-07-04 | DOI: 10.1080/0969594X.2021.2004530
Fumiyo Nakatsuhara, Vivien Berry
Citations: 2

Abstract

The theme of the very first Special Issue of Assessment in Education: Principles, Policy and Practice (Volume 10, Issue 3, published in 2003) was ‘Assessment for the Digital Age’. The editorial of that Special Issue notes that the aim of the volume was to ‘draw the attention of the international assessment community to a range of potential and actual relationships between digital technologies and assessment’ (McFarlane, 2003, p. 261). Since then, there is no doubt that the role of digital technologies in assessment has evolved even more dynamically than any assessment researchers and practitioners had expected. In particular, exponential advances in technology and the increased availability of high-speed internet in recent years have not only changed the way we communicate orally in social, professional, and educational contexts, but also the ways in which we assess oral language. Revisiting the same theme after almost two decades, but specifically from an oral language assessment perspective, this Special Issue presents conceptual and empirical papers that discuss the opportunities and challenges that the latest innovative affordances offer. The current landscape of oral language assessment can be characterised by numerous examples of the development and use of digital technology (Sawaki, 2022; Xi, 2022). While these innovations have opened the door to types of speaking test tasks which were previously not possible and have provided language test practitioners with more efficient ways of delivering and scoring tests, it should be kept in mind that ‘each of the affordances offered by technology also raises a new set of issues to be tackled’ (Chapelle, 2018). This does not mean that we should be excessively concerned or sceptical about technology-mediated assessments; it simply means that greater transparency is needed. 
Up-to-date information and appropriate guidance about the use of innovative technology in language testing and, more importantly, what language skills are elicited from test-takers and how they are measured, should be available to test users so that they can both embrace and critically engage with the fast-moving developments in the field (see also Khabbazbashi et al., 2021; Litman et al., 2018). This current Special Issue therefore aims to contribute to and to encourage transparent dialogues by test researchers, practitioners, and users within the international testing community on recent research which investigates both methods of delivery and methods of scoring in technology-mediated oral language assessments. Of the seven articles in this volume, the first three are on the application of technologies for speaking test delivery. In the opening article, Ockey and Neiriz offer a conceptual paper examining five models of technology-delivered assessments of oral communication that have been utilised over the past three decades. Drawing on Bachman and Palmer's (1996) qualities of test usefulness, Ockey and Hirch's (2020) assessment of English as a lingua franca (ELF) framework, and Harding and McNamara's (2018) work on ELF and its relationship to language assessment constructs, Ockey and Neiriz present …
Source journal
Assessment in Education-Principles Policy & Practice
CiteScore: 5.70
Self-citation rate: 3.10%
Articles published per year: 29
Journal overview: Recent decades have witnessed significant developments in the field of educational assessment. New approaches to the assessment of student achievement have been complemented by the increasing prominence of educational assessment as a policy issue. In particular, there has been a growth of interest in modes of assessment that promote, as well as measure, standards and quality. These have profound implications for individual learners, institutions and the educational system itself. Assessment in Education provides a focus for scholarly output in the field of assessment. The journal is explicitly international in focus and encourages contributions from a wide range of assessment systems and cultures. The journal's intention is to explore both commonalities and differences in policy and practice.