Unsupervised techniques for generating a standard sample self-explanation answer with knowledge components in a math quiz

Research and Practice in Technology Enhanced Learning (IF 3.1, Q1, Education & Educational Research) · Pub Date: 2023-08-16 · DOI: 10.58459/rptel.2024.19016
Ryosuke Nakamoto, B. Flanagan, Yiling Dai, Kyosuke Takami, H. Ogata
Citations: 2

Abstract

Self-explanation is a widely recognized and effective pedagogical method. Previous research has indicated that self-explanation can be used to evaluate students' comprehension and identify their areas of difficulty on mathematical quizzes. However, most analytical techniques require pre-labeled materials, which limits the potential for large-scale study. Conversely, utilizing collected self-explanations without supervision is challenging because there is little research on this topic. Therefore, this study investigates the feasibility of automatically generating a standardized self-explanation sample answer from unsupervised collected self-explanations. The proposed model involves preprocessing and three machine learning steps: vectorization, clustering, and extraction. Experiments involving 1,434 self-explanation answers from 25 quizzes indicate that 72% of the quizzes generate sample answers containing all the necessary knowledge components. The similarity between human-generated and machine-generated sentences was significant, with a moderate positive correlation, r(23) = .48, p < .05. The best-performing generative model also achieved a high BERTScore of 0.715. Regarding the readability of the generated sample answers, the average score of the human-generated sentences was superior to that of the machine-generated ones. These results suggest that the proposed model can generate sample answers that contain critical knowledge components and can be further improved with BERTScore. This study is expected to have numerous applications, including identifying students' areas of difficulty, scoring self-explanations, presenting students with reference materials for learning, and automatically generating scaffolding templates to train self-explanation skills.
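The abstract names a three-step pipeline (vectorization, clustering, extraction) but does not specify the concrete models here. A minimal sketch under assumed choices — TF-IDF vectors, k-means clusters (one per knowledge component), and extraction of the answer nearest each cluster centroid — might look like this; the example answers are invented for illustration:

```python
# Sketch of a vectorize -> cluster -> extract pipeline for building a
# sample answer from collected self-explanations. TF-IDF and k-means are
# assumptions, not necessarily the paper's actual models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

answers = [
    "The slope is rise over run, so m = (4 - 2) / (3 - 1) = 1.",
    "Slope equals change in y over change in x, giving 1.",
    "Substitute x = 3 into y = x + 1 to get y = 4.",
    "Plugging x = 3 into the equation yields y = 4.",
]

# Step 1: vectorization — embed each self-explanation as a TF-IDF vector.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(answers)

# Step 2: clustering — assume two knowledge components, one cluster each.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Step 3: extraction — take the answer closest to each centroid and join
# them into a standard sample answer covering every component.
closest, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, X)
sample_answer = " ".join(answers[i] for i in sorted(set(closest)))
print(sample_answer)
```

With sentence embeddings (e.g. SBERT) in place of TF-IDF, the same cluster-then-extract structure would carry over unchanged.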
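The reported correlation, r(23) = .48 with p < .05, is a Pearson correlation over the 25 quizzes (degrees of freedom n − 2 = 23) between scores for human-written and machine-generated sample answers. A small numpy sketch of that computation, with made-up scores standing in for the paper's ratings:

```python
# Pearson r between paired quiz-level scores, df = n - 2 = 23 for n = 25.
# The scores here are synthetic, generated only to demonstrate the check.
import numpy as np

rng = np.random.default_rng(0)
human = rng.uniform(1, 5, size=25)             # hypothetical human-answer scores
machine = 0.5 * human + rng.normal(0, 1, 25)   # correlated by construction

r = np.corrcoef(human, machine)[0, 1]
print(f"r({len(human) - 2}) = {r:.2f}")
```

A dedicated stats package (e.g. `scipy.stats.pearsonr`) would additionally return the p-value used for the significance claim.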
Source journal: Research and Practice in Technology Enhanced Learning
CiteScore: 7.10
Self-citation rate: 3.10%
Articles per year: 28
Review time: 13 weeks