Comparative Analysis of Problem Representation Learning in Math Word Problem Solving
Bin He, Guanghua Liang, Shengnan Chen, Kewen Pan, Zhangwen Miao, Litian Huang
2022 International Conference on Intelligent Education and Intelligent Research (IEIR), published 2022-12-18
DOI: 10.1109/IEIR56323.2022.10050067
Citations: 0
Abstract
When developing a math word problem (MWP) solver, the problem text is usually modeled as a word sequence and fed into a recurrent neural network to capture the quantity relationships expressed in the text. Recently, more and more researchers have leveraged graph-based models for problem representation learning, and significant improvements are claimed to have been achieved. To explore the potential effectiveness of representation learning methods on benchmark datasets with diverse characteristics, this paper conducts a comparative analysis of problem representation learning. The frameworks of typical representation learning methods are studied, and comparative experiments are implemented to reveal the performance variations in solving different types of math word problems. Experimental results show that, compared to sequence-based problem learning, applying graph-based learning methods yields no significant performance improvement.
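To make the contrast between the two representation styles concrete, the following is a minimal, hypothetical sketch (not the paper's actual method): a sequence-based view simply keeps the token order with quantities mapped to a placeholder, while a graph-based view (loosely inspired by quantity-graph constructions such as those in Graph2Tree-style solvers) links each quantity to its nearby context words. The tokenizer, window size, and placeholder token are all illustrative assumptions.

```python
import re

NUM_PATTERN = r"\d+\.?\d*"

def tokenize(text):
    # Split into numbers, words, and punctuation (simplified, illustrative tokenizer)
    return re.findall(NUM_PATTERN + r"|[a-zA-Z]+|[^\s\w]", text)

def sequence_representation(tokens):
    # Sequence-based view: preserve token order, replace quantities with a
    # placeholder so the encoder generalizes across specific numbers
    return ["<NUM>" if re.fullmatch(NUM_PATTERN, t) else t for t in tokens]

def quantity_graph(tokens, window=3):
    # Graph-based view (hypothetical simplification): connect each quantity
    # node to context words within a fixed window, yielding edges a graph
    # encoder (e.g. a GCN) could message-pass over
    quantity_ids = [i for i, t in enumerate(tokens)
                    if re.fullmatch(NUM_PATTERN, t)]
    edges = set()
    for i in quantity_ids:
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                edges.add((i, j))
    return quantity_ids, sorted(edges)

problem = "Tom has 3 apples and buys 5 more . How many apples does he have ?"
tokens = tokenize(problem)
seq = sequence_representation(tokens)           # ordered tokens with <NUM> slots
quantities, edges = quantity_graph(tokens)      # quantity nodes + context edges
```

In an actual solver, the sequence would feed a recurrent encoder and the edge list would feed a graph neural network; the paper's comparison asks whether the extra graph structure actually helps downstream equation generation.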