{"title":"Automatic RNN Cell Design for Knowledge Tracing using Reinforcement Learning","authors":"Xinyi Ding, Eric C. Larson","doi":"10.1145/3386527.3406729","DOIUrl":null,"url":null,"abstract":"Empirical results have shown that deep neural networks achieve superior performance in the application of Knowledge Tracing. However, the design of recurrent cells like long short term memory (LSTM) cells or gated recurrent units (GRU) is influenced largely by applications in natural language processing. They were proposed and evaluated in the context of sequence to sequence modeling, like machine translation. Even though the LSTM cell works well for knowledge tracing, it is unknown if its architecture is ideally suited for knowledge tracing. Despite the fact that there are several recurrent neural network based architectures proposed for knowledge tracing, the methodologies rely on empirical observations and trial and error, which may not be efficient or scalable. In this study, we investigate using reinforcement learning for the automatic design of recurrent neural network cells for knowledge tracing, showing improved performance compared to the LSTM cell. We also discuss a potential method for model regularization using neural architecture search.","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":"21 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Seventh ACM Conference on Learning @ Scale","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3386527.3406729","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Empirical results have shown that deep neural networks achieve superior performance in knowledge tracing. However, the design of recurrent cells such as the long short-term memory (LSTM) cell and the gated recurrent unit (GRU) has been driven largely by applications in natural language processing: these cells were proposed and evaluated in the context of sequence-to-sequence tasks such as machine translation. Although the LSTM cell works well for knowledge tracing, it is unknown whether its architecture is ideally suited to the task. Several recurrent neural network-based architectures have been proposed for knowledge tracing, but their design relies on empirical observation and trial and error, which may not be efficient or scalable. In this study, we investigate using reinforcement learning to automatically design recurrent neural network cells for knowledge tracing, showing improved performance compared to the LSTM cell. We also discuss a potential method for model regularization using neural architecture search.
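To make the setting concrete, below is a minimal sketch of a standard LSTM-based knowledge tracing baseline in the style of Deep Knowledge Tracing, the kind of model whose recurrent cell the paper's reinforcement-learning search would replace. It is not the paper's searched architecture; the names and sizes (NUM_SKILLS, HIDDEN_SIZE, DKTBaseline) are illustrative assumptions.

```python
# A minimal sketch of an LSTM-based knowledge tracing baseline (DKT-style),
# not the searched cell from the paper. A cell found by neural architecture
# search would take the place of nn.LSTM below.
import torch
import torch.nn as nn

NUM_SKILLS = 100    # hypothetical number of distinct skills/exercises
HIDDEN_SIZE = 200   # hypothetical recurrent hidden size


class DKTBaseline(nn.Module):
    """Given a student's interaction history, predict the probability of
    answering each skill correctly at the next step."""

    def __init__(self, num_skills: int = NUM_SKILLS, hidden_size: int = HIDDEN_SIZE):
        super().__init__()
        # Each interaction is encoded as a 2 * num_skills one-hot vector
        # (skill id crossed with correct/incorrect), as in standard DKT.
        self.rnn = nn.LSTM(input_size=2 * num_skills,
                           hidden_size=hidden_size,
                           batch_first=True)
        self.out = nn.Linear(hidden_size, num_skills)

    def forward(self, interactions: torch.Tensor) -> torch.Tensor:
        # interactions: (batch, seq_len, 2 * num_skills)
        hidden_states, _ = self.rnn(interactions)
        # Per-skill probability of a correct response at each step.
        return torch.sigmoid(self.out(hidden_states))


if __name__ == "__main__":
    model = DKTBaseline()
    fake_batch = torch.zeros(4, 20, 2 * NUM_SKILLS)  # 4 students, 20 steps
    preds = model(fake_batch)
    print(preds.shape)  # torch.Size([4, 20, 100])
```

In a reinforcement-learning architecture search of the kind the abstract describes, a controller proposes candidate cell structures, each candidate is trained briefly on the knowledge tracing task, and the resulting validation performance serves as the reward for updating the controller.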