Method with Data to Text Generation Based on Selecting Encoding and Fusing Semantic Loss

Yuelin Chen, ZhuCheng Gao, XiaoDong Cai
DOI: 10.1109/ISCEIC53685.2021.00038
Published in: 2021 2nd International Symposium on Computer Engineering and Intelligent Communications (ISCEIC), August 2021

Abstract

This paper proposes a data-to-text generation method based on selective encoding and a fused semantic loss. By highlighting key content and reducing redundancy in the descriptive text, the quality of the generated text is significantly improved. First, a new selection network is designed that uses the amount of information associated with each data record as the basis for encoding content importance, and refines its output over multiple rounds of dynamic iteration to select important information accurately and comprehensively. Second, during decoding with a Long Short-Term Memory (LSTM) network, a hierarchical attention mechanism assigns dynamic selection weights to the different entities and their attributes in the hidden-layer output, maximizing the recall of the generated text. Finally, a semantic similarity loss between the generated text and the reference text is introduced: the cosine distance between their semantic vectors is computed and fed back iteratively into training, optimizing key features while reducing redundant description and improving the model's BLEU performance. Experimental results show that the method reaches a test precision of 94.58%, a recall of 53.72%, and a BLEU score of 17.24, outperforming existing models.
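The semantic similarity loss sketched in the abstract can be illustrated in a few lines of plain Python. This is a minimal sketch, not the paper's implementation: the sentence-embedding model is not specified here, so `gen_vec` and `ref_vec` are assumed to be precomputed semantic vectors of the generated and reference texts.

```python
import math

def semantic_similarity_loss(gen_vec, ref_vec):
    """Cosine distance between the semantic vectors of the generated
    and reference texts; minimizing it during training pushes the
    generated text semantically closer to the reference."""
    dot = sum(g * r for g, r in zip(gen_vec, ref_vec))
    norm_g = math.sqrt(sum(g * g for g in gen_vec))
    norm_r = math.sqrt(sum(r * r for r in ref_vec))
    return 1.0 - dot / (norm_g * norm_r)

# Identical vectors give zero loss; orthogonal vectors give a loss of 1.0.
print(semantic_similarity_loss([1.0, 0.0], [1.0, 0.0]))  # 0.0
print(semantic_similarity_loss([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

In practice this scalar would be added, with some weight, to the standard token-level generation loss, which is how the "fused" loss in the title can be read.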