Automated Assessment of Comprehension Strategies from Self-Explanations Using LLMs

Information (Switzerland) · Computer Science, Information Systems (Q3, IF 2.4)
Published: 2023-10-14 · DOI: 10.3390/info14100567
Bogdan Nicula, Mihai Dascalu, Tracy Arner, Renu Balyan, Danielle S. McNamara
{"title":"Automated Assessment of Comprehension Strategies from Self-Explanations Using LLMs","authors":"Bogdan Nicula, Mihai Dascalu, Tracy Arner, Renu Balyan, Danielle S. McNamara","doi":"10.3390/info14100567","DOIUrl":null,"url":null,"abstract":"Text comprehension is an essential skill in today’s information-rich world, and self-explanation practice helps students improve their understanding of complex texts. This study was centered on leveraging open-source Large Language Models (LLMs), specifically FLAN-T5, to automatically assess the comprehension strategies employed by readers while understanding Science, Technology, Engineering, and Mathematics (STEM) texts. The experiments relied on a corpus of three datasets (N = 11,833) with self-explanations annotated on 4 dimensions: 3 comprehension strategies (i.e., bridging, elaboration, and paraphrasing) and overall quality. Besides FLAN-T5, we also considered GPT3.5-turbo to establish a stronger baseline. Our experiments indicated that the performance improved with fine-tuning, having a larger LLM model, and providing examples via the prompt. Our best model considered a pretrained FLAN-T5 XXL model and obtained a weighted F1-score of 0.721, surpassing the 0.699 F1-score previously obtained using smaller models (i.e., RoBERTa).","PeriodicalId":38479,"journal":{"name":"Information (Switzerland)","volume":"1 1","pages":"0"},"PeriodicalIF":2.4000,"publicationDate":"2023-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information (Switzerland)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/info14100567","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

Text comprehension is an essential skill in today’s information-rich world, and self-explanation practice helps students improve their understanding of complex texts. This study was centered on leveraging open-source Large Language Models (LLMs), specifically FLAN-T5, to automatically assess the comprehension strategies employed by readers while understanding Science, Technology, Engineering, and Mathematics (STEM) texts. The experiments relied on a corpus of three datasets (N = 11,833) with self-explanations annotated on four dimensions: three comprehension strategies (i.e., bridging, elaboration, and paraphrasing) and overall quality. Besides FLAN-T5, we also considered GPT-3.5-turbo to establish a stronger baseline. Our experiments indicated that performance improved with fine-tuning, with larger model sizes, and when examples were provided via the prompt. Our best model, based on a pretrained FLAN-T5 XXL, obtained a weighted F1-score of 0.721, surpassing the 0.699 F1-score previously obtained using smaller models (i.e., RoBERTa).
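The abstract describes the approach only at a high level. As a rough illustration, the sketch below shows one way a single comprehension-strategy score could be elicited from an instruction-tuned FLAN-T5 model (via the Hugging Face transformers library) and how predictions could be evaluated with the weighted F1-score the paper reports. The prompt wording, the 0–2 label scale, the smaller model checkpoint, and the toy examples are assumptions made purely for illustration; this is not the authors' pipeline, prompts, or data.

```python
# Minimal sketch (not the authors' pipeline): zero-shot prompting of FLAN-T5
# for one comprehension strategy (paraphrasing), plus weighted-F1 evaluation.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from sklearn.metrics import f1_score

MODEL_NAME = "google/flan-t5-base"  # the paper's best results used FLAN-T5 XXL

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def rate_paraphrasing(source_text: str, self_explanation: str) -> str:
    """Prompt the model for a 0-2 paraphrasing score (hypothetical scale and wording)."""
    prompt = (
        "Rate how much the self-explanation paraphrases the source sentence "
        "on a scale of 0 (not at all), 1 (somewhat), or 2 (extensively). "
        "Answer with a single digit.\n"
        f"Source sentence: {source_text}\n"
        f"Self-explanation: {self_explanation}\n"
        "Score:"
    )
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True).strip()

# Toy evaluation: weighted F1 over gold vs. predicted labels, the metric
# reported in the paper (the items and gold labels here are invented).
items = [
    ("Cells divide through a process called mitosis.",
     "Cells split into two identical cells via mitosis.", "2"),
    ("Cells divide through a process called mitosis.",
     "This reminds me that I should study biology more.", "0"),
    ("Cells divide through a process called mitosis.",
     "Division produces two cells.", "1"),
]
gold = [label for _, _, label in items]
pred = [rate_paraphrasing(src, se) for src, se, _ in items]
print("Weighted F1:", f1_score(gold, pred, average="weighted"))
```

Per the abstract, fine-tuning on the annotated corpus, using larger model sizes, and adding examples to the prompt all improved over such zero-shot use; the weighted-F1 evaluation stays the same either way.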