Ryosuke Nakamoto, B. Flanagan, Yiling Dai, Kyosuke Takami, H. Ogata
{"title":"在数学测验中生成包含知识成分的标准示例自我解释答案的无监督技术","authors":"Ryosuke Nakamoto, B. Flanagan, Yiling Dai, Kyosuke Takami, H. Ogata","doi":"10.58459/rptel.2024.19016","DOIUrl":null,"url":null,"abstract":"Self-explanation is a widely recognized and effective pedagogical method. Previous research has indicated that self-explanation can be used to evaluate students’ comprehension and identify their areas of difficulty on mathematical quizzes. However, most analytical techniques necessitate pre-labeled materials, which limits the potential for large-scale study. Conversely, utilizing collected self-explanations without supervision is challenging because there is little research on this topic. Therefore, this study aims to investigate the feasibility of automatically generating a standardized self-explanation sample answer from unsupervised collected self-explanations. The proposed model involves preprocessing and three machine learning steps: vectorization, clustering, and extraction. Experiments involving 1,434 self-explanation answers from 25 quizzes indicate that 72% of the quizzes generate sample answers containing all the necessary knowledge components. The similarity between human-generated and machine-generated sentences was significant with moderate positive correlation, r(23) = .48, p < .05.The best-performing generative model also achieved a high BERTScore of 0.715. Regarding the readability of the generated sample answers, the average score of the human-generated sentences was superior to that of the machine-generated ones. These results suggest that the proposed model can generate sample answers that contain critical knowledge components and can be further improved with BERTScore. This study is expected to have numerous applications, including identifying students’ areas of difficulty, scoring self-explanations, presenting students with reference materials for learning, and automatically generating scaffolding templates to train self-explanation skills.","PeriodicalId":37055,"journal":{"name":"Research and Practice in Technology Enhanced Learning","volume":"100 1","pages":"16"},"PeriodicalIF":3.1000,"publicationDate":"2023-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Unsupervised techniques for generating a standard sample self-explanation answer with knowledge components in a math quiz\",\"authors\":\"Ryosuke Nakamoto, B. Flanagan, Yiling Dai, Kyosuke Takami, H. Ogata\",\"doi\":\"10.58459/rptel.2024.19016\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Self-explanation is a widely recognized and effective pedagogical method. Previous research has indicated that self-explanation can be used to evaluate students’ comprehension and identify their areas of difficulty on mathematical quizzes. However, most analytical techniques necessitate pre-labeled materials, which limits the potential for large-scale study. Conversely, utilizing collected self-explanations without supervision is challenging because there is little research on this topic. Therefore, this study aims to investigate the feasibility of automatically generating a standardized self-explanation sample answer from unsupervised collected self-explanations. The proposed model involves preprocessing and three machine learning steps: vectorization, clustering, and extraction. 
Experiments involving 1,434 self-explanation answers from 25 quizzes indicate that 72% of the quizzes generate sample answers containing all the necessary knowledge components. The similarity between human-generated and machine-generated sentences was significant with moderate positive correlation, r(23) = .48, p < .05.The best-performing generative model also achieved a high BERTScore of 0.715. Regarding the readability of the generated sample answers, the average score of the human-generated sentences was superior to that of the machine-generated ones. These results suggest that the proposed model can generate sample answers that contain critical knowledge components and can be further improved with BERTScore. This study is expected to have numerous applications, including identifying students’ areas of difficulty, scoring self-explanations, presenting students with reference materials for learning, and automatically generating scaffolding templates to train self-explanation skills.\",\"PeriodicalId\":37055,\"journal\":{\"name\":\"Research and Practice in Technology Enhanced Learning\",\"volume\":\"100 1\",\"pages\":\"16\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2023-08-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Research and Practice in Technology Enhanced Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.58459/rptel.2024.19016\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Research and Practice in Technology Enhanced Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.58459/rptel.2024.19016","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 2
Unsupervised techniques for generating a standard sample self-explanation answer with knowledge components in a math quiz

Abstract
Self-explanation is a widely recognized and effective pedagogical method. Previous research has indicated that self-explanation can be used to evaluate students' comprehension and identify their areas of difficulty on mathematical quizzes. However, most analytical techniques require pre-labeled materials, which limits the potential for large-scale studies. Conversely, making use of self-explanations collected without supervision is challenging, because there is little research on this topic. Therefore, this study investigates the feasibility of automatically generating a standardized self-explanation sample answer from unsupervised collected self-explanations. The proposed model involves preprocessing followed by three machine learning steps: vectorization, clustering, and extraction. Experiments involving 1,434 self-explanation answers from 25 quizzes indicate that 72% of the quizzes generate sample answers containing all the necessary knowledge components. The similarity between human-generated and machine-generated sentences was significant, with a moderate positive correlation, r(23) = .48, p < .05. The best-performing generative model also achieved a high BERTScore of 0.715. Regarding the readability of the generated sample answers, the average score of the human-generated sentences was superior to that of the machine-generated ones. These results suggest that the proposed model can generate sample answers containing critical knowledge components, and that it can be further improved using BERTScore. This study is expected to have numerous applications, including identifying students' areas of difficulty, scoring self-explanations, presenting students with reference materials for learning, and automatically generating scaffolding templates to train self-explanation skills.
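The abstract names the pipeline stages (vectorization, clustering, extraction) but not the concrete algorithms. As a minimal sketch only, assuming TF-IDF vectorization, k-means clustering, and nearest-to-centroid extraction (none of which are confirmed by the abstract), the three steps could look like this:

```python
# Illustrative sketch of the three-step pipeline described in the abstract.
# The vectorizer, clustering algorithm, and extraction rule here are
# assumptions, not the authors' documented choices.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances_argmin_min

def build_sample_answer(explanations, n_components=3):
    """Cluster student self-explanations and stitch one representative
    sentence per cluster into a candidate standard sample answer."""
    # Step 1: vectorization (preprocessing is left to the vectorizer's defaults).
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform(explanations)

    # Step 2: clustering, with one cluster per expected knowledge component.
    km = KMeans(n_clusters=n_components, n_init=10, random_state=0)
    km.fit(vectors)

    # Step 3: extraction -- take the explanation closest to each centroid.
    closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, vectors)
    return " ".join(explanations[i] for i in sorted(set(closest)))

# Toy input: six hypothetical self-explanations covering three knowledge components.
answers = [
    "First expand the brackets to get 2x + 6.",
    "Expanding gives 2x + 6 on the left side.",
    "Then subtract 6 from both sides.",
    "Subtracting 6 isolates the x term.",
    "Finally divide by 2 so x = 4.",
    "Dividing both sides by 2 gives x = 4.",
]
print(build_sample_answer(answers, n_components=3))
```

The key assumption in this sketch is that the number of clusters matches the number of knowledge components expected in the quiz; the extracted representatives are then concatenated into a candidate sample answer.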
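The reported BERTScore of 0.715 can be computed in spirit with the public bert-score package (https://github.com/Tiiiger/bert_score); the candidate and reference strings below are invented placeholders, not the study's data or its exact scoring setup:

```python
# Hedged evaluation sketch: scoring a machine-generated sample answer
# against a human-written reference with BERTScore F1. The strings are
# synthetic placeholders.
from bert_score import score

candidates = ["Expand the brackets, subtract 6 from both sides, then divide by 2 to get x = 4."]
references = ["First expand the brackets to obtain 2x + 6, subtract 6, and divide by 2, giving x = 4."]

# lang="en" selects a default English model; P, R, F1 are torch tensors.
P, R, F1 = score(candidates, references, lang="en")
print(f"BERTScore F1: {F1.mean().item():.3f}")
```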
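Note that the degrees of freedom in r(23) = .48 are consistent with one similarity score per quiz: df = n − 2 = 25 − 2 = 23. A minimal check of that convention with SciPy, using synthetic placeholder scores rather than the study's data:

```python
# Placeholder illustration of the r(df) reporting convention with df = n - 2:
# 25 paired quiz-level similarity scores give 23 degrees of freedom.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
human = rng.random(25)                        # hypothetical human-rated similarity per quiz
machine = 0.5 * human + 0.5 * rng.random(25)  # correlated hypothetical machine similarity

r, p = pearsonr(human, machine)
df = len(human) - 2
print(f"r({df}) = {r:.2f}, p = {p:.3f}")
```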