Semformer: Transformer Language Models with Semantic Planning

Yongjing Yin, Junran Ding, Kai Song, Yue Zhang
{"title":"Semformer:具有语义规划功能的转换器语言模型","authors":"Yongjing Yin, Junran Ding, Kai Song, Yue Zhang","doi":"arxiv-2409.11143","DOIUrl":null,"url":null,"abstract":"Next-token prediction serves as the dominant component in current neural\nlanguage models. During the training phase, the model employs teacher forcing,\nwhich predicts tokens based on all preceding ground truth tokens. However, this\napproach has been found to create shortcuts, utilizing the revealed prefix to\nspuriously fit future tokens, potentially compromising the accuracy of the\nnext-token predictor. In this paper, we introduce Semformer, a novel method of\ntraining a Transformer language model that explicitly models the semantic\nplanning of response. Specifically, we incorporate a sequence of planning\ntokens into the prefix, guiding the planning token representations to predict\nthe latent semantic representations of the response, which are induced by an\nautoencoder. In a minimal planning task (i.e., graph path-finding), our model\nexhibits near-perfect performance and effectively mitigates shortcut learning,\na feat that standard training methods and baseline models have been unable to\naccomplish. Furthermore, we pretrain Semformer from scratch with 125M\nparameters, demonstrating its efficacy through measures of perplexity,\nin-context learning, and fine-tuning on summarization tasks.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Semformer: Transformer Language Models with Semantic Planning\",\"authors\":\"Yongjing Yin, Junran Ding, Kai Song, Yue Zhang\",\"doi\":\"arxiv-2409.11143\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Next-token prediction serves as the dominant component in current neural\\nlanguage models. During the training phase, the model employs teacher forcing,\\nwhich predicts tokens based on all preceding ground truth tokens. However, this\\napproach has been found to create shortcuts, utilizing the revealed prefix to\\nspuriously fit future tokens, potentially compromising the accuracy of the\\nnext-token predictor. In this paper, we introduce Semformer, a novel method of\\ntraining a Transformer language model that explicitly models the semantic\\nplanning of response. Specifically, we incorporate a sequence of planning\\ntokens into the prefix, guiding the planning token representations to predict\\nthe latent semantic representations of the response, which are induced by an\\nautoencoder. In a minimal planning task (i.e., graph path-finding), our model\\nexhibits near-perfect performance and effectively mitigates shortcut learning,\\na feat that standard training methods and baseline models have been unable to\\naccomplish. 
Furthermore, we pretrain Semformer from scratch with 125M\\nparameters, demonstrating its efficacy through measures of perplexity,\\nin-context learning, and fine-tuning on summarization tasks.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11143\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11143","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Next-token prediction serves as the dominant component in current neural language models. During the training phase, the model employs teacher forcing, which predicts tokens based on all preceding ground truth tokens. However, this approach has been found to create shortcuts, utilizing the revealed prefix to spuriously fit future tokens, potentially compromising the accuracy of the next-token predictor. In this paper, we introduce Semformer, a novel method of training a Transformer language model that explicitly models the semantic planning of the response. Specifically, we incorporate a sequence of planning tokens into the prefix, guiding the planning token representations to predict the latent semantic representations of the response, which are induced by an autoencoder. In a minimal planning task (i.e., graph path-finding), our model exhibits near-perfect performance and effectively mitigates shortcut learning, a feat that standard training methods and baseline models have been unable to accomplish. Furthermore, we pretrain Semformer from scratch with 125M parameters, demonstrating its efficacy through measures of perplexity, in-context learning, and fine-tuning on summarization tasks.
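The sketch below illustrates the training objective the abstract describes: planning tokens are inserted after the prefix, and their hidden states are trained to match latent representations of the response produced by an autoencoder, alongside the usual next-token loss. This is a minimal PyTorch sketch under assumptions of my own; the module names (TinyLM, ResponseAutoencoder, semformer_loss), the toy sizes, the MSE alignment loss, and the frozen autoencoder are illustrative choices, not the paper's implementation.

```python
# Minimal sketch of a Semformer-style objective: next-token loss plus a
# planning loss that ties planning-token hidden states to autoencoder latents
# of the response. Names, sizes, and loss choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

V, D, K = 1000, 64, 4  # vocab size, hidden size, number of planning tokens

class TinyLM(nn.Module):
    """Causal Transformer LM that returns logits and hidden states."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(V + K, D)  # extra ids reserved for planning tokens
        layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(D, V + K)

    def forward(self, ids):
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1)).to(ids.device)
        h = self.blocks(self.embed(ids), mask=mask)
        return self.lm_head(h), h

class ResponseAutoencoder(nn.Module):
    """Encodes the response into K latent vectors used as planning targets."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(V, D)
        self.encoder = nn.GRU(D, D, batch_first=True)
        self.to_latent = nn.Linear(D, K * D)

    def forward(self, response_ids):
        _, h_last = self.encoder(self.embed(response_ids))
        return self.to_latent(h_last[-1]).view(-1, K, D)  # (B, K, D)

def semformer_loss(lm, ae, prefix, response, plan_weight=1.0):
    B = prefix.size(0)
    plan_ids = torch.arange(V, V + K, device=prefix.device).expand(B, K)
    ids = torch.cat([prefix, plan_ids, response], dim=1)  # prefix | plan | response
    logits, hidden = lm(ids)

    # 1) standard next-token loss on the response span (teacher forcing)
    resp_start = prefix.size(1) + K
    nt_logits = logits[:, resp_start - 1:-1]  # each position predicts the next response token
    nt_loss = F.cross_entropy(nt_logits.reshape(-1, V + K), response.reshape(-1))

    # 2) planning loss: hidden states at the planning positions regress onto
    #    autoencoder latents of the full response (autoencoder frozen here as
    #    a simplification of the latent-induction step)
    plan_hidden = hidden[:, prefix.size(1):resp_start]  # (B, K, D)
    with torch.no_grad():
        targets = ae(response)
    plan_loss = F.mse_loss(plan_hidden, targets)

    return nt_loss + plan_weight * plan_loss

# toy usage
lm, ae = TinyLM(), ResponseAutoencoder()
prefix = torch.randint(0, V, (2, 10))
response = torch.randint(0, V, (2, 12))
loss = semformer_loss(lm, ae, prefix, response)
loss.backward()
```

In this sketch the planning positions sit between the prefix and the response, so under the causal mask the response tokens can attend to them, while the planning loss pushes those positions to encode a summary of the whole response rather than a shortcut continuation of the revealed prefix.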