A BERT-Based Pre-Training Model for Solving Math Application Problems

Yuhao Jia, Pingheng Wang, Zhen Zhang, Chi Cheng, Zhifei Li, Xinguo Yu
{"title":"基于bert的数学应用问题预训练模型","authors":"Yuhao Jia, Pingheng Wang, Zhen Zhang, Chi Cheng, Zhifei Li, Xinguo Yu","doi":"10.1109/IEIR56323.2022.10050073","DOIUrl":null,"url":null,"abstract":"Solving the math application problem is hot research in intelligence education. An increasing number of research scholars are using pre-trained models to tackle machine solution problems. Noteworthily, the semantic relationships required in the machine solution task are for describing math problems, while those of the BERT model with pre-training weights are of general significance, which will cause a mismatched word vector representation. To solve this problem, we proposed a self-supervised pre-training method based on loss priority. We use the input data from the downstream task datasets to fine-tune the existing BERT model so that the dynamic word vector it obtained can better match the downstream tasks. And the size of the loss value of each data batch in each round of training will be recorded to decide which data should be trained in the next round, so that the model has a faster convergence speed. Furthermore, considering that in large-scale mathematics application problems, some problems have almost the same forms of solution. We proposed a machine solution model training algorithm based on the analogy of the same problem type. Extensive experiments on two well-known datasets show the superiority of our proposed algorithms compared to other state-of-the-art algorithms.","PeriodicalId":183709,"journal":{"name":"2022 International Conference on Intelligent Education and Intelligent Research (IEIR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A BERT-Based Pre-Training Model for Solving Math Application Problems\",\"authors\":\"Yuhao Jia, Pingheng Wang, Zhen Zhang, Chi Cheng, Zhifei Li, Xinguo Yu\",\"doi\":\"10.1109/IEIR56323.2022.10050073\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Solving the math application problem is hot research in intelligence education. An increasing number of research scholars are using pre-trained models to tackle machine solution problems. Noteworthily, the semantic relationships required in the machine solution task are for describing math problems, while those of the BERT model with pre-training weights are of general significance, which will cause a mismatched word vector representation. To solve this problem, we proposed a self-supervised pre-training method based on loss priority. We use the input data from the downstream task datasets to fine-tune the existing BERT model so that the dynamic word vector it obtained can better match the downstream tasks. And the size of the loss value of each data batch in each round of training will be recorded to decide which data should be trained in the next round, so that the model has a faster convergence speed. Furthermore, considering that in large-scale mathematics application problems, some problems have almost the same forms of solution. We proposed a machine solution model training algorithm based on the analogy of the same problem type. 
Extensive experiments on two well-known datasets show the superiority of our proposed algorithms compared to other state-of-the-art algorithms.\",\"PeriodicalId\":183709,\"journal\":{\"name\":\"2022 International Conference on Intelligent Education and Intelligent Research (IEIR)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Conference on Intelligent Education and Intelligent Research (IEIR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IEIR56323.2022.10050073\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Intelligent Education and Intelligent Research (IEIR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IEIR56323.2022.10050073","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Solving math application problems is a hot research topic in intelligent education, and a growing number of researchers are using pre-trained models to tackle machine-solving tasks. Notably, the semantic relationships required by the machine-solving task are those that describe math problems, whereas the relationships encoded in a BERT model's general pre-training weights are general-purpose, which leads to mismatched word-vector representations. To solve this problem, we propose a self-supervised pre-training method based on loss priority. We use the input data of the downstream task datasets to fine-tune the existing BERT model so that the dynamic word vectors it produces better match the downstream tasks, and we record the loss value of each data batch in each round of training to decide which data should be trained in the next round, giving the model a faster convergence speed. Furthermore, considering that in large-scale collections of math application problems some problems have almost identical solution forms, we propose a machine-solution model training algorithm based on analogy between problems of the same type. Extensive experiments on two well-known datasets show the superiority of our proposed algorithms over other state-of-the-art algorithms.
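The paper's implementation is not public, so the two sketches below are illustrative readings of the abstract only; every function name and hyperparameter (loss_priority_pretrain, keep_ratio, rounds, the bert-base-chinese checkpoint, the 15% masking rate) is an assumption, not the authors' code.

First, a minimal sketch of the loss-priority self-supervised pre-training, assuming the self-supervision is BERT's standard masked-language-model objective run over the downstream datasets' problem texts: the loss of every batch is recorded in each round, and only the highest-loss batches are scheduled for the next round, so the least well-fitted problems are revisited first.

```python
# Hypothetical sketch of loss-priority pre-training; not the authors' code.
import random

import torch
from transformers import (BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling)

def loss_priority_pretrain(problem_texts, rounds=3, batch_size=16, keep_ratio=0.5):
    # Checkpoint and masking rate are assumptions, not stated in the paper.
    tok = BertTokenizerFast.from_pretrained("bert-base-chinese")
    model = BertForMaskedLM.from_pretrained("bert-base-chinese")
    collator = DataCollatorForLanguageModeling(tok, mlm_probability=0.15)
    optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

    # Tokenize once; each element is one math word problem taken from the
    # downstream task dataset, as the abstract describes.
    enc = [tok(t, truncation=True, max_length=128) for t in problem_texts]
    batches = [list(range(i, min(i + batch_size, len(enc))))
               for i in range(0, len(enc), batch_size)]

    model.train()
    for _ in range(rounds):
        random.shuffle(batches)
        scored = []  # (masked-LM loss, indices of the batch's examples)
        for idx in batches:
            batch = collator([enc[i] for i in idx])  # pads, masks ~15% of tokens
            loss = model(**batch).loss
            optim.zero_grad()
            loss.backward()
            optim.step()
            scored.append((loss.item(), idx))
        # Loss priority: keep only the highest-loss batches for the next round,
        # so training concentrates on the hardest data and converges faster.
        scored.sort(key=lambda p: p[0], reverse=True)
        keep = max(1, int(len(scored) * keep_ratio))
        batches = [idx for _, idx in scored[:keep]]
    return model
```

Second, a hypothetical reading of the analogy-based training on same-type problems: the abstract only states that problems with almost identical solution forms are exploited, so the grouping key below, a number-normalized equation template, is an assumed stand-in for whatever notion of "same problem type" the paper actually uses.

```python
# Hypothetical grouping of problems by solution form; the template key is an
# assumption, since the paper does not define "same problem type" in the abstract.
import re
from collections import defaultdict

def equation_template(equation: str) -> str:
    # Map every number to the placeholder "N", so "x = 3 + 5" and
    # "x = 12 + 7" both normalize to the same template "x = N + N".
    return re.sub(r"\d+(?:\.\d+)?", "N", equation)

def group_by_problem_type(problems):
    # problems: iterable of (problem_text, solution_equation) pairs.
    # Returns template -> list of problem texts, so a trainer can sample
    # batches of analogous (same-solution-form) problems together.
    groups = defaultdict(list)
    for text, equation in problems:
        groups[equation_template(equation)].append(text)
    return groups
```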