Angxuan Chen, Yuyue Zhang, Jiyou Jia, Min Liang, Yingying Cha, Cher Ping Lim
{"title":"人工智能辅助语言学习评估的系统回顾和荟萃分析:设计、实施和效果","authors":"Angxuan Chen, Yuyue Zhang, Jiyou Jia, Min Liang, Yingying Cha, Cher Ping Lim","doi":"10.1111/jcal.13064","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Language assessment plays a pivotal role in language education, serving as a bridge between students' understanding and educators' instructional approaches. Recently, advancements in Artificial Intelligence (AI) technologies have introduced transformative possibilities for automating and personalising language assessments.</p>\n </section>\n \n <section>\n \n <h3> Objectives</h3>\n \n <p>This article aims to explore the design and implementation of AI-enabled assessment tools in language education, filling the research gaps regarding the impact of assessment type, intervention duration, education level, and first language learner/second language learner (L1/L2) on the effectiveness of AI-enabled assessment tools in enhancing students' language learning outcome.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>This study conducted a systematic review and meta-analysis to examine 25 empirical studies from January 2012 to March 2024 from six databases (including EBSCO, ProQuest, Scopus, Web of Science, ACM Digital Library and CNKI).</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>The predominant design in AI-driven assessment tools is the structural AI architecture. These tools are most frequently deployed in classroom settings for upper primary students within a short duration. A subsequent meta-analysis showed a medium overall effect size (Hedges's <i>g</i> = 0.390, <i>p</i> < 0.001) for the application of AI-enabled assessment tools in enhancing students' language learning, underscoring their significant impact on language learning outcomes. This evidence robustly supports the practical utility of these tools in educational contexts.</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>The analysis of several moderator variables (i.e., assessment type, intervention duration, educational level and L1/L2 learners) and potential impacts on language learning performance indicates that AI-enabled assessment could be more useful in language education with a proper implementation design. Future research could investigate diverse instructional designs for integrating AI-based assessment tools in language education.</p>\n </section>\n </div>","PeriodicalId":48071,"journal":{"name":"Journal of Computer Assisted Learning","volume":"41 1","pages":""},"PeriodicalIF":5.1000,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A systematic review and meta-analysis of AI-enabled assessment in language learning: Design, implementation, and effectiveness\",\"authors\":\"Angxuan Chen, Yuyue Zhang, Jiyou Jia, Min Liang, Yingying Cha, Cher Ping Lim\",\"doi\":\"10.1111/jcal.13064\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>Language assessment plays a pivotal role in language education, serving as a bridge between students' understanding and educators' instructional approaches. 
Recently, advancements in Artificial Intelligence (AI) technologies have introduced transformative possibilities for automating and personalising language assessments.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Objectives</h3>\\n \\n <p>This article aims to explore the design and implementation of AI-enabled assessment tools in language education, filling the research gaps regarding the impact of assessment type, intervention duration, education level, and first language learner/second language learner (L1/L2) on the effectiveness of AI-enabled assessment tools in enhancing students' language learning outcome.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>This study conducted a systematic review and meta-analysis to examine 25 empirical studies from January 2012 to March 2024 from six databases (including EBSCO, ProQuest, Scopus, Web of Science, ACM Digital Library and CNKI).</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>The predominant design in AI-driven assessment tools is the structural AI architecture. These tools are most frequently deployed in classroom settings for upper primary students within a short duration. A subsequent meta-analysis showed a medium overall effect size (Hedges's <i>g</i> = 0.390, <i>p</i> < 0.001) for the application of AI-enabled assessment tools in enhancing students' language learning, underscoring their significant impact on language learning outcomes. This evidence robustly supports the practical utility of these tools in educational contexts.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusions</h3>\\n \\n <p>The analysis of several moderator variables (i.e., assessment type, intervention duration, educational level and L1/L2 learners) and potential impacts on language learning performance indicates that AI-enabled assessment could be more useful in language education with a proper implementation design. Future research could investigate diverse instructional designs for integrating AI-based assessment tools in language education.</p>\\n </section>\\n </div>\",\"PeriodicalId\":48071,\"journal\":{\"name\":\"Journal of Computer Assisted Learning\",\"volume\":\"41 1\",\"pages\":\"\"},\"PeriodicalIF\":5.1000,\"publicationDate\":\"2024-11-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Computer Assisted Learning\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/jcal.13064\",\"RegionNum\":2,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computer Assisted Learning","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/jcal.13064","RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
A systematic review and meta-analysis of AI-enabled assessment in language learning: Design, implementation, and effectiveness
Background
Language assessment plays a pivotal role in language education, serving as a bridge between students' understanding and educators' instructional approaches. Recently, advancements in Artificial Intelligence (AI) technologies have introduced transformative possibilities for automating and personalising language assessments.
Objectives
This article explores the design and implementation of AI-enabled assessment tools in language education, addressing research gaps concerning how assessment type, intervention duration, educational level, and first/second language (L1/L2) learner status affect the effectiveness of such tools in enhancing students' language learning outcomes.
Methods
This study conducted a systematic review and meta-analysis of 25 empirical studies published between January 2012 and March 2024, retrieved from six databases (EBSCO, ProQuest, Scopus, Web of Science, the ACM Digital Library and CNKI).
Results
The predominant design among AI-driven assessment tools is the structural AI architecture. These tools are most frequently deployed in classroom settings with upper primary students over short intervention durations. A subsequent meta-analysis showed a medium overall effect size (Hedges's g = 0.390, p < 0.001) for the application of AI-enabled assessment tools in enhancing students' language learning, underscoring their significant impact on language learning outcomes. This evidence supports the practical utility of these tools in educational contexts.
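For readers unfamiliar with the effect size reported above, the minimal sketch below illustrates how Hedges's g is conventionally computed for a single study before study-level estimates are pooled in a meta-analysis. The group means, standard deviations, and sample sizes here are hypothetical and are not taken from the reviewed studies; the paper's actual pooling procedure is not reproduced.

import math

def hedges_g(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardised mean difference with the small-sample correction (Hedges's g)."""
    df = n_treat + n_ctrl - 2
    # Pooled standard deviation across the treatment and control groups
    sd_pooled = math.sqrt(((n_treat - 1) * sd_treat ** 2 +
                           (n_ctrl - 1) * sd_ctrl ** 2) / df)
    d = (mean_treat - mean_ctrl) / sd_pooled   # Cohen's d
    j = 1 - 3 / (4 * df - 1)                   # small-sample correction factor J
    return j * d

# Hypothetical post-test scores: AI-assessment group vs. control group
print(round(hedges_g(78.4, 72.1, 10.2, 11.0, 30, 30), 3))   # about 0.586 for these made-up values

In a meta-analysis, study-level values of g computed this way are then pooled (commonly with a random-effects model) to yield an overall estimate such as the 0.390 reported in the Results.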
Conclusions
The analysis of several moderator variables (i.e., assessment type, intervention duration, educational level and L1/L2 learner status) and their potential impacts on language learning performance indicates that AI-enabled assessment could be more useful in language education when its implementation is properly designed. Future research could investigate diverse instructional designs for integrating AI-based assessment tools in language education.
Journal Introduction
The Journal of Computer Assisted Learning is an international peer-reviewed journal which covers the whole range of uses of information and communication technology to support learning and knowledge exchange. It aims to provide a medium for communication among researchers as well as a channel linking researchers, practitioners, and policy makers. JCAL is also a rich source of material for master's and PhD students in areas such as educational psychology, the learning sciences, instructional technology, instructional design, collaborative learning, intelligent learning systems, learning analytics, open, distance and networked learning, and educational evaluation and assessment. This is the case for formal (e.g., schools), non-formal (e.g., workplace learning) and informal learning (e.g., museums and libraries) situations and environments. Volumes often include one Special Issue, which provides readers with a broad and in-depth perspective on a specific topic. First published in 1985, JCAL continues to have the aim of making the outcomes of contemporary research and experience accessible. During this period there have been major technological advances offering new opportunities and approaches in the use of a wide range of technologies to support learning and knowledge transfer more generally. There is currently much emphasis on the use of network functionality and the challenges its appropriate uses pose to teachers/tutors working with students locally and at a distance. JCAL welcomes:
- Empirical reports, single studies or programmatic series of studies on the use of computers and information technologies in learning and assessment
- Critical and original meta-reviews of literature on the use of computers for learning
- Empirical studies on the design and development of innovative technology-based systems for learning
- Conceptual articles on issues relating to the Aims and Scope