Towards an Appropriate Query, Key, and Value Computation for Knowledge Tracing

Youngduck Choi, Youngnam Lee, Junghyun Cho, Jineon Baek, Byungsoo Kim, Yeongmin Cha, Dongmin Shin, Chan Bae, Jaewe Heo
{"title":"Towards an Appropriate Query, Key, and Value Computation for Knowledge Tracing","authors":"Youngduck Choi, Youngnam Lee, Junghyun Cho, Jineon Baek, Byungsoo Kim, Yeongmin Cha, Dongmin Shin, Chan Bae, Jaewe Heo","doi":"10.1145/3386527.3405945","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a novel Transformer-based model for knowledge tracing, SAINT: Separated Self-AttentIve Neural Knowledge Tracing. SAINT has an encoder-decoder structure where the exercise and response embedding sequences separately enter, respectively, the encoder and the decoder. The encoder applies self-attention layers to the sequence of exercise embeddings, and the decoder alternately applies self-attention layers and encoder-decoder attention layers to the sequence of response embeddings. This separation of input allows us to stack attention layers multiple times, resulting in an improvement in area under receiver operating characteristic curve (AUC). To the best of our knowledge, this is the first work to suggest an encoder-decoder model for knowledge tracing that applies deep self-attentive layers to exercises and responses separately. We empirically evaluate SAINT on a large-scale knowledge tracing dataset, EdNet, collected by an active mobile education application, Santa, which has 627,347 users, 72,907,005 response data points as well as a set of 16,175 exercises gathered since 2016. The results show that SAINT achieves state-of-the-art performance in knowledge tracing with an improvement of 1.8% in AUC compared to the current state-of-the-art model.","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"96","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Seventh ACM Conference on Learning @ Scale","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3386527.3405945","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 96

Abstract

In this paper, we propose a novel Transformer-based model for knowledge tracing, SAINT: Separated Self-AttentIve Neural Knowledge Tracing. SAINT has an encoder-decoder structure in which the exercise embedding sequence enters the encoder and the response embedding sequence enters the decoder. The encoder applies self-attention layers to the sequence of exercise embeddings, and the decoder alternately applies self-attention layers and encoder-decoder attention layers to the sequence of response embeddings. This separation of inputs allows us to stack attention layers multiple times, which improves the area under the receiver operating characteristic curve (AUC). To the best of our knowledge, this is the first work to propose an encoder-decoder model for knowledge tracing that applies deep self-attentive layers to exercises and responses separately. We empirically evaluate SAINT on a large-scale knowledge tracing dataset, EdNet, collected by Santa, an active mobile education application with 627,347 users, 72,907,005 response data points, and 16,175 exercises gathered since 2016. The results show that SAINT achieves state-of-the-art performance in knowledge tracing, improving AUC by 1.8% over the previous state-of-the-art model.
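To make the described architecture concrete, below is a minimal sketch (not the authors' released code) of the input separation the abstract describes: exercise embeddings feed a self-attentive encoder, response embeddings feed a decoder that alternates self-attention with encoder-decoder attention, and causal masks keep each position from attending to the future. All names and hyperparameters (SaintSketch, d_model, layer counts, sequence length) are illustrative assumptions; only the 16,175-exercise vocabulary size comes from the abstract.

```python
# Hedged sketch of a SAINT-style encoder-decoder for knowledge tracing.
# PyTorch's nn.Transformer already stacks self-attention in the encoder and
# alternates self-attention with encoder-decoder attention in the decoder,
# which mirrors the structure described in the abstract.
import torch
import torch.nn as nn


class SaintSketch(nn.Module):
    def __init__(self, n_exercises=16175, d_model=256, n_heads=8,
                 n_layers=4, max_len=100):
        super().__init__()
        self.exercise_emb = nn.Embedding(n_exercises, d_model)
        self.response_emb = nn.Embedding(3, d_model)  # incorrect / correct / start token
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=n_heads,
            num_encoder_layers=n_layers, num_decoder_layers=n_layers,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, 1)  # logit for "response is correct"

    def forward(self, exercises, responses):
        # exercises, responses: (batch, seq_len) integer ids
        seq_len = exercises.size(1)
        pos = torch.arange(seq_len, device=exercises.device)
        src = self.exercise_emb(exercises) + self.pos_emb(pos)   # encoder input
        tgt = self.response_emb(responses) + self.pos_emb(pos)   # decoder input
        # Upper-triangular additive mask: position i attends only to positions <= i.
        causal = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=exercises.device),
            diagonal=1,
        )
        h = self.transformer(src, tgt,
                             src_mask=causal, tgt_mask=causal, memory_mask=causal)
        return torch.sigmoid(self.out(h)).squeeze(-1)  # (batch, seq_len) correctness probs


# Toy usage: two students, ten interactions each.
model = SaintSketch()
ex = torch.randint(0, 16175, (2, 10))   # exercise ids
re = torch.randint(0, 2, (2, 10))       # past responses (0 = wrong, 1 = right)
print(model(ex, re).shape)              # torch.Size([2, 10])
```

The key design point the sketch illustrates is that exercises and responses never share an input stream: the only place response representations see exercise information is through the decoder's encoder-decoder attention, which is what lets the attention layers be stacked deeply on each stream.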