Crosslingual Content Scoring in Five Languages Using Machine-Translation and Multilingual Transformer Models
Pub Date: 2023-11-03 | DOI: 10.1007/s40593-023-00370-1
Andrea Horbach, Joey Pehlke, Ronja Laarmann-Quante, Yuning Ding
Abstract This paper investigates crosslingual content scoring, a scenario in which scoring models trained on learner data in one language are applied to data in a different language. We analyze data in five languages (Chinese, English, French, German and Spanish) collected for three prompts of the established English ASAP content scoring dataset. We cross the language barrier with both shallow and deep crosslingual classification models, using machine translation as well as multilingual transformer models. We find that a combination of machine translation and multilingual models outperforms each method individually: our best results are reached when combining the available data in different languages, i.e., first training a model on the large English ASAP dataset and then fine-tuning on smaller amounts of training data in the target language.
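The two-stage fine-tuning scheme described in the abstract can be sketched as follows. This is a minimal illustration assuming a HuggingFace setup; the checkpoint, CSV file names, and column names (an `answer` text column and a `label` score column) are placeholder assumptions, not the authors' actual pipeline.

```python
# Sketch: fine-tune a multilingual transformer on the large English ASAP
# data first, then continue fine-tuning on a small target-language set.
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)
from datasets import load_dataset

MODEL = "bert-base-multilingual-cased"  # any multilingual encoder works
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=4)  # e.g. ASAP score points 0-3

def tokenize(batch):
    return tokenizer(batch["answer"], truncation=True, padding="max_length")

def finetune(model, csv_path, output_dir, epochs):
    # Each hypothetical CSV is assumed to hold 'answer' and 'label' columns.
    ds = load_dataset("csv", data_files=csv_path)["train"].map(
        tokenize, batched=True)
    args = TrainingArguments(output_dir=output_dir,
                             num_train_epochs=epochs,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=ds).train()
    return model

# Stage 1: large English source data; Stage 2: small target-language data.
model = finetune(model, "asap_english.csv", "stage1_en", epochs=3)
model = finetune(model, "answers_german.csv", "stage2_de", epochs=3)
```

The same two-stage recipe applies to any of the five target languages by swapping the stage-2 file.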
{"title":"Crosslingual Content Scoring in Five Languages Using Machine-Translation and Multilingual Transformer Models","authors":"Andrea Horbach, Joey Pehlke, Ronja Laarmann-Quante, Yuning Ding","doi":"10.1007/s40593-023-00370-1","DOIUrl":"https://doi.org/10.1007/s40593-023-00370-1","url":null,"abstract":"Abstract This paper investigates crosslingual content scoring, a scenario where scoring models trained on learner data in one language are applied to data in a different language. We analyze data in five different languages (Chinese, English, French, German and Spanish) collected for three prompts of the established English ASAP content scoring dataset. We cross the language barrier by means of both shallow and deep learning crosslingual classification models using both machine translation and multilingual transformer models. We find that a combination of machine translation and multilingual models outperforms each method individually - our best results are reached when combining the available data in different languages, i.e. first training a model on the large English ASAP dataset before fine-tuning on smaller amounts of training data in the target language.","PeriodicalId":46637,"journal":{"name":"International Journal of Artificial Intelligence in Education","volume":"12 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135818346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Review on Neural Question Generation for Education Purposes
Pub Date: 2023-10-31 | DOI: 10.1007/s40593-023-00374-x
Said Al Faraby, Adiwijaya Adiwijaya, Ade Romadhony
Abstract Questioning plays a vital role in education, directing knowledge construction and assessing students’ understanding. However, creating high-level questions requires significant creativity and effort. Automatic question generation is expected to facilitate the generation of questions that are not only fluent and relevant but also educationally valuable. While rule-based methods are intuitive for short inputs, they struggle with longer and more complex inputs. Neural question generation (NQG) has shown better results in this regard. This review summarizes the advancements in NQG between 2016 and early 2022, focusing on the development of NQG for educational purposes, including challenges and research opportunities. We found that although NQG can generate fluent and relevant factoid-type questions, few studies focus on education. Specifically, there is limited literature using context in the form of multiple paragraphs, which, owing to the input-length limitations of current deep learning techniques, requires key content identification. The desirable key content should be important to specific topics or learning objectives and should support generating particular types of questions. A further research opportunity is controllable NQG systems, which can be customized to factors such as difficulty level, desired answer type, and other individualized needs. Equally important, the results of our review also suggest that it is necessary to create datasets specific to the question generation tasks, with annotations that support better learning for neural-based methods.
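As a toy illustration of the NQG pipeline this review surveys, the sketch below generates a question from a short context with a seq2seq transformer. The checkpoint name is a hypothetical placeholder for any T5-style model fine-tuned on a question-generation dataset; an off-the-shelf T5 is not trained for this task.

```python
# Illustrative neural question generation with a seq2seq transformer.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

CKPT = "your-org/t5-question-generation"  # hypothetical QG checkpoint
tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSeq2SeqLM.from_pretrained(CKPT)

context = ("Photosynthesis converts light energy into chemical energy "
           "stored in glucose.")
# Prompt format depends on how the checkpoint was fine-tuned.
inputs = tokenizer("generate question: " + context, return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

Controllable NQG, as discussed above, would extend such a prompt with extra control tokens (e.g., difficulty level or answer type) seen during fine-tuning.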
{"title":"Review on Neural Question Generation for Education Purposes","authors":"Said Al Faraby, Adiwijaya Adiwijaya, Ade Romadhony","doi":"10.1007/s40593-023-00374-x","DOIUrl":"https://doi.org/10.1007/s40593-023-00374-x","url":null,"abstract":"Abstract Questioning plays a vital role in education, directing knowledge construction and assessing students’ understanding. However, creating high-level questions requires significant creativity and effort. Automatic question generation is expected to facilitate the generation of not only fluent and relevant but also educationally valuable questions. While rule-based methods are intuitive for short inputs, they struggle with longer and more complex inputs. Neural question generation (NQG) has shown better results in this regard. This review summarizes the advancements in NQG between 2016 and early 2022. The focus is on the development of NQG for educational purposes, including challenges and research opportunities. We found that although NQG can generate fluent and relevant factoid-type questions, few studies focus on education. Specifically, there is limited literature using context in the form of multi-paragraphs, which due to the input limitation of the current deep learning techniques, require key content identification. The desirable key content should be important to specific topics or learning objectives and be able to generate certain types of questions. A further research opportunity is controllable NQG systems, which can be customized by taking into account factors like difficulty level, desired answer type, and other individualized needs. Equally important, the results of our review also suggest that it is necessary to create datasets specific to the question generation tasks with annotations that support better learning for neural-based methods.","PeriodicalId":46637,"journal":{"name":"International Journal of Artificial Intelligence in Education","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135872020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Navigating Ethical Benefits and Risks as AIED Comes of Age
Pub Date: 2023-10-18 | DOI: 10.1007/s40593-023-00350-5
Ken Koedinger
{"title":"Navigating Ethical Benefits and Risks as AIED Comes of Age","authors":"Ken Koedinger","doi":"10.1007/s40593-023-00350-5","DOIUrl":"https://doi.org/10.1007/s40593-023-00350-5","url":null,"abstract":"","PeriodicalId":46637,"journal":{"name":"International Journal of Artificial Intelligence in Education","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135888795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How and When: The Impact of Metacognitive Knowledge Instruction and Motivation on Transfer Across Intelligent Tutoring Systems
Pub Date: 2023-09-27 | DOI: 10.1007/s40593-023-00371-0
Mark Abdelshiheed, Tiffany Barnes, Min Chi
{"title":"How and When: The Impact of Metacognitive Knowledge Instruction and Motivation on Transfer Across Intelligent Tutoring Systems","authors":"Mark Abdelshiheed, Tiffany Barnes, Min Chi","doi":"10.1007/s40593-023-00371-0","DOIUrl":"https://doi.org/10.1007/s40593-023-00371-0","url":null,"abstract":"","PeriodicalId":46637,"journal":{"name":"International Journal of Artificial Intelligence in Education","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Gaze-Based Identification of Students’ Strategies in Histogram Tasks through an Interpretable Mathematical Model and a Machine Learning Algorithm
Pub Date: 2023-09-22 | DOI: 10.1007/s40593-023-00368-9
Lonneke Boels, Enrique Garcia Moreno-Esteva, Arthur Bakker, Paul Drijvers
Abstract As a first step toward automatic feedback based on students’ strategies for solving histogram tasks, we investigated how strategy recognition can be automated based on students’ gazes. A previous study showed how students’ task-specific strategies can be inferred from their gazes. The research question addressed in the present article is how data science tools (interpretable mathematical models and machine learning analyses) can be used to automatically identify students’ task-specific strategies from their gazes on single histograms. We report on a study of cognitive behavior that uses data science methods to analyze its data. The study consisted of three phases: (1) using a supervised machine learning algorithm (MLA) that provided a baseline for the next step, (2) designing an interpretable mathematical model (IMM), and (3) comparing the results. For the first phase, we used random forests as the classification method, implemented in a software package (Wolfram Research Mathematica, ‘Classify’ function) that automates many aspects of the data handling, including creating features and initially choosing the MLA for this classification. The results of the random forests (1) provided a baseline against which we compared the results of our IMM (2). The previous study revealed that students’ horizontal or vertical gaze patterns on the graph area were indicative of most students’ strategies on single histograms; the IMM captures these patterns in a model. The MLA (1) performed well but is a black box. The IMM (2) is transparent, performed well, and is theoretically meaningful. The comparison (3) showed that the MLA and the IMM identified the same task-solving strategies. The results allow for the future design of teacher dashboards that report which students use which strategy, or for immediate, personalized feedback during online learning, homework, or massive open online courses (MOOCs) by measuring eye movements, for example, with a webcam.
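For a concrete feel of the baseline in phase (1), here is a minimal Python analogue; the authors used Mathematica's Classify function rather than scikit-learn, and the two gaze features and synthetic labels below are illustrative stand-ins for the horizontal/vertical gaze statistics described in the abstract.

```python
# Sketch: random-forest baseline over toy gaze features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: [n_horizontal_saccades, n_vertical_saccades]
# counted on the graph area of a single histogram.
X = rng.poisson(lam=(6, 3), size=(n, 2)).astype(float)
# Synthetic strategy labels keyed to which gaze direction dominates,
# mimicking the pattern the interpretable model (IMM) captures.
y = (X[:, 0] > X[:, 1]).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # baseline accuracy
```

The corresponding IMM would amount to the transparent rule in the label line (horizontal dominance implies one strategy), which is what makes it interpretable where the forest is a black box.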
{"title":"Automated Gaze-Based Identification of Students’ Strategies in Histogram Tasks through an Interpretable Mathematical Model and a Machine Learning Algorithm","authors":"Lonneke Boels, Enrique Garcia Moreno-Esteva, Arthur Bakker, Paul Drijvers","doi":"10.1007/s40593-023-00368-9","DOIUrl":"https://doi.org/10.1007/s40593-023-00368-9","url":null,"abstract":"Abstract As a first step toward automatic feedback based on students’ strategies for solving histogram tasks we investigated how strategy recognition can be automated based on students’ gazes. A previous study showed how students’ task-specific strategies can be inferred from their gazes. The research question addressed in the present article is how data science tools (interpretable mathematical models and machine learning analyses) can be used to automatically identify students’ task-specific strategies from students’ gazes on single histograms. We report on a study of cognitive behavior that uses data science methods to analyze its data. The study consisted of three phases: (1) using a supervised machine learning algorithm (MLA) that provided a baseline for the next step, (2) designing an interpretable mathematical model (IMM), and (3) comparing the results. For the first phase, we used random forest as a classification method implemented in a software package (Wolfram Research Mathematica, ‘Classify Function’) that automates many aspects of the data handling, including creating features and initially choosing the MLA for this classification. The results of the random forests (1) provided a baseline to which we compared the results of our IMM (2). The previous study revealed that students’ horizontal or vertical gaze patterns on the graph area were indicative of most students’ strategies on single histograms. The IMM captures these in a model. The MLA (1) performed well but is a black box. The IMM (2) is transparent, performed well, and is theoretically meaningful. The comparison (3) showed that the MLA and IMM identified the same task-solving strategies. The results allow for the future design of teacher dashboards that report which students use what strategy, or for immediate, personalized feedback during online learning, homework, or massive open online courses (MOOCs) through measuring eye movements, for example, with a webcam.","PeriodicalId":46637,"journal":{"name":"International Journal of Artificial Intelligence in Education","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136059166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can ChatGPT Pass High School Exams on English Language Comprehension?
Pub Date: 2023-09-13 | DOI: 10.1007/s40593-023-00372-z
Joost C. F. de Winter
Abstract Launched in late November 2022, ChatGPT, a large language model chatbot, has garnered considerable attention. However, ongoing questions remain regarding its capabilities. In this study, ChatGPT was used to complete national high school exams in the Netherlands on the topic of English reading comprehension. In late December 2022, we submitted the exam questions through the ChatGPT web interface (GPT-3.5). According to official norms, ChatGPT achieved a mean grade of 7.3 on the Dutch scale of 1 to 10, comparable to the mean grade of 6.99 across all students who took the exam in the Netherlands. However, ChatGPT occasionally required re-prompting to arrive at an explicit answer; without these nudges, the overall grade was 6.5. In March 2023, API access became available and a new version of ChatGPT, GPT-4, was released. We submitted the same exams to the API, and GPT-4 achieved a score of 8.3 without any need for re-prompting. Additionally, a bootstrapping method that incorporated randomness through ChatGPT’s ‘temperature’ parameter proved effective at self-identifying potentially incorrect answers. Finally, a re-assessment conducted with the GPT-4 model updated as of June 2023 showed no substantial change in the overall score. The present findings highlight significant opportunities but also raise concerns about the impact of ChatGPT and similar large language models on educational assessment.
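The temperature-based bootstrapping idea can be sketched generically as follows; `ask_model` is a hypothetical stand-in for an actual chat-completion API call, and the sample count and agreement threshold are illustrative assumptions, not values from the paper.

```python
# Sketch: sample the same exam question repeatedly at non-zero
# temperature and flag answers the model is inconsistent about.
from collections import Counter

def ask_model(question: str, temperature: float) -> str:
    """Hypothetical wrapper around an LLM API; returns e.g. 'A'-'D'."""
    raise NotImplementedError

def self_check(question: str, n_samples: int = 20,
               temperature: float = 1.0) -> tuple[str, float]:
    answers = [ask_model(question, temperature) for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples
    return answer, agreement

# A low agreement score (e.g. below 0.7) marks the majority answer as a
# potentially incorrect one that merits review.
```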
{"title":"Can ChatGPT Pass High School Exams on English Language Comprehension?","authors":"Joost C. F. de Winter","doi":"10.1007/s40593-023-00372-z","DOIUrl":"https://doi.org/10.1007/s40593-023-00372-z","url":null,"abstract":"Abstract Launched in late November 2022, ChatGPT, a large language model chatbot, has garnered considerable attention. However, ongoing questions remain regarding its capabilities. In this study, ChatGPT was used to complete national high school exams in the Netherlands on the topic of English reading comprehension. In late December 2022, we submitted the exam questions through the ChatGPT web interface (GPT-3.5). According to official norms, ChatGPT achieved a mean grade of 7.3 on the Dutch scale of 1 to 10—comparable to the mean grade of all students who took the exam in the Netherlands, 6.99. However, ChatGPT occasionally required re-prompting to arrive at an explicit answer; without these nudges, the overall grade was 6.5. In March 2023, API access was made available, and a new version of ChatGPT, GPT-4, was released. We submitted the same exams to the API, and GPT-4 achieved a score of 8.3 without a need for re-prompting. Additionally, employing a bootstrapping method that incorporated randomness through ChatGPT’s ‘temperature’ parameter proved effective in self-identifying potentially incorrect answers. Finally, a re-assessment conducted with the GPT-4 model updated as of June 2023 showed no substantial change in the overall score. The present findings highlight significant opportunities but also raise concerns about the impact of ChatGPT and similar large language models on educational assessment.","PeriodicalId":46637,"journal":{"name":"International Journal of Artificial Intelligence in Education","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134990111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Text-based Question Difficulty Prediction: A Systematic Review of Automatic Approaches
Pub Date: 2023-09-08 | DOI: 10.1007/s40593-023-00362-1
Samah AlKhuzaey, Floriana Grasso, Terry R. Payne, V. Tamma
{"title":"Text-based Question Difficulty Prediction: A Systematic Review of Automatic Approaches","authors":"Samah AlKhuzaey, Floriana Grasso, Terry R. Payne, V. Tamma","doi":"10.1007/s40593-023-00362-1","DOIUrl":"https://doi.org/10.1007/s40593-023-00362-1","url":null,"abstract":"","PeriodicalId":46637,"journal":{"name":"International Journal of Artificial Intelligence in Education","volume":"1 1","pages":""},"PeriodicalIF":4.9,"publicationDate":"2023-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41495891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the Impact of Backward Strategy Learning in a Logic Tutor: Aiding Subgoal Learning Towards Improved Problem Solving
Pub Date: 2023-08-21 | DOI: 10.1007/s40593-023-00338-1
Preya Shabrina, Behrooz Mostafavi, Mark Abdelshiheed, Min Chi, Tiffany Barnes
Abstract Learning to derive subgoals reduces the gap between experts and students and prepares students for future problem solving. Researchers have explored subgoal-labeled instructional materials in traditional problem solving and within tutoring systems to help novices learn to subgoal. However, little research has examined problem-solving strategies in relation to subgoal learning, and these strategies remain under-explored within computer-based tutors and learning environments. The backward problem-solving strategy is closely related to the process of subgoaling, in which problem solving iteratively refines the goal into a new subgoal to reduce difficulty. In this paper, we explore a training strategy for backward strategy learning within an intelligent logic tutor that teaches logic-proof construction. The training session involved backward worked examples (BWE) and backward problem solving (BPS) to help students learn the backward strategy and thereby improve their subgoaling and problem-solving skills. To evaluate the training strategy, we analyzed students’ (1) experience with and engagement in learning the backward strategy, (2) performance, and (3) proof-construction approaches in new problems that they solved independently, without tutor help, after each level of training and in the posttest. Our results showed that, when new problems were given to solve without any tutor help, students trained with both BWE and BPS outperformed students who received none of the treatment or only BWE during training. Additionally, students trained with both BWE and BPS derived subgoals during proof construction with significantly higher efficiency than the other two groups.
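As a toy illustration of backward problem solving (not the tutor's actual implementation), the sketch below refines a goal into subgoals by chaining backward through inference rules until given facts are reached, which is the subgoaling process the abstract describes; the rules and facts are invented for the example.

```python
# Sketch: backward chaining over toy Horn-clause rules.
RULES = {            # conclusion: list of premise sets that derive it
    "C": [["A", "B"]],
    "D": [["C"]],
}
FACTS = {"A", "B"}   # what is given at the start of the proof

def prove(goal, depth=0):
    print("  " * depth + "subgoal: " + goal)
    if goal in FACTS:
        return True
    # Refine the current goal into the premises of any matching rule.
    return any(all(prove(p, depth + 1) for p in premises)
               for premises in RULES.get(goal, []))

print(prove("D"))  # refines D -> C -> {A, B}, which are given
```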
{"title":"Investigating the Impact of Backward Strategy Learning in a Logic Tutor: Aiding Subgoal Learning Towards Improved Problem Solving","authors":"Preya Shabrina, Behrooz Mostafavi, Mark Abdelshiheed, Min Chi, Tiffany Barnes","doi":"10.1007/s40593-023-00338-1","DOIUrl":"https://doi.org/10.1007/s40593-023-00338-1","url":null,"abstract":"Abstract Learning to derive subgoals reduces the gap between experts and students and makes students prepared for future problem solving. Researchers have explored subgoal-labeled instructional materials in traditional problem solving and within tutoring systems to help novices learn to subgoal. However, only a little research is found on problem-solving strategies in relationship with subgoal learning. Also, these strategies are under-explored within computer-based tutors and learning environments. The backward problem-solving strategy is closely related to the process of subgoaling, where problem solving iteratively refines the goal into a new subgoal to reduce difficulty. In this paper, we explore a training strategy for backward strategy learning within an intelligent logic tutor that teaches logic-proof construction. The training session involved backward worked examples (BWE) and problem solving (BPS) to help students learn backward strategy towards improving their subgoaling and problem-solving skills. To evaluate the training strategy, we analyzed students’ 1) experience with and engagement in learning backward strategy, 2) performance and 3) proof construction approaches in new problems that they solved independently without tutor help after each level of training and in posttest. Our results showed that, when new problems were given to solve without any tutor help, students who were trained with both BWE and BPS outperformed students who received none of the treatment or only BWE during training. Additionally, students trained with both BWE and BPS derived subgoals during proof construction with significantly higher efficiency than the other two groups.","PeriodicalId":46637,"journal":{"name":"International Journal of Artificial Intelligence in Education","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135772362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI in Education, Learner Control, and Human-AI Collaboration
Pub Date: 2023-08-21 | DOI: 10.1007/s40593-023-00356-z
Peter Brusilovsky
{"title":"AI in Education, Learner Control, and Human-AI Collaboration","authors":"Peter Brusilovsky","doi":"10.1007/s40593-023-00356-z","DOIUrl":"https://doi.org/10.1007/s40593-023-00356-z","url":null,"abstract":"","PeriodicalId":46637,"journal":{"name":"International Journal of Artificial Intelligence in Education","volume":" ","pages":""},"PeriodicalIF":4.9,"publicationDate":"2023-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46120608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correction to: AIED: From Cognitive Simulations to Learning Engineering, with Humans in the Middle
Pub Date: 2023-08-18 | DOI: 10.1007/s40593-023-00369-8
D. McNamara
{"title":"Correction to: AIED: From Cognitive Simulations to Learning Engineering, with Humans in the Middle","authors":"D. McNamara","doi":"10.1007/s40593-023-00369-8","DOIUrl":"https://doi.org/10.1007/s40593-023-00369-8","url":null,"abstract":"","PeriodicalId":46637,"journal":{"name":"International Journal of Artificial Intelligence in Education","volume":" ","pages":""},"PeriodicalIF":4.9,"publicationDate":"2023-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46073057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}