Graph-to-Text Generation Combining Directed and Undirected Structural Information in Knowledge Graphs*

Icon (Q3, Arts and Humanities) · Pub Date: 2023-03-01 · DOI: 10.1109/ICNLP58431.2023.00064
Hongda Gong, Shimin Shan, Hongkui Wei
Citations: 0

Abstract

The graph-to-text generation task transforms knowledge graphs into natural language. In current research, pretrained language models (PLMs) have shown better performance than structured graph encoders on this task. PLMs currently serialise knowledge graphs mostly by converting them into undirected graph structures. An undirected structure provides a more comprehensive representation of the information in the knowledge graph, but it makes the dependencies between entities difficult to capture, so the represented information may be inaccurate. We therefore use four types of positional embedding to encode both the directed and the undirected structure of the knowledge graph, so that we can represent its information, and the dependencies between entities, more fully. We then add a semantic aggregation module to the Transformer layers of the PLM, which produces a more comprehensive representation of the knowledge graph while also capturing the dependencies between entities. Our approach thus combines the advantages of directed and undirected structural information. In addition, the new approach captures generic knowledge better and achieves stronger results on small samples of data.
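The abstract does not spell out what the four positional-embedding types are, but the idea of jointly encoding directed and undirected structure can be sketched as assigning each entity pair one of four structural-position indices, each of which would select a learned embedding. The scheme below (self, forward edge, reverse edge, no direct edge) is an illustrative assumption, not the paper's actual design; the function and variable names are hypothetical.

```python
def build_position_matrix(entities, triples):
    """Return a matrix of structural-position indices between entities.

    Assumed index meaning:
      0 = same entity
      1 = directed edge head -> tail (follows edge direction)
      2 = reverse direction tail -> head (only visible in a directed view)
      3 = no direct edge (the pair may still be linked in the
          undirected view via other paths)
    """
    idx = {e: i for i, e in enumerate(entities)}
    n = len(entities)
    pos = [[3] * n for _ in range(n)]   # default: not directly connected
    for i in range(n):
        pos[i][i] = 0                   # self position
    for head, _rel, tail in triples:
        h, t = idx[head], idx[tail]
        pos[h][t] = 1                   # along the edge direction
        pos[t][h] = 2                   # against the edge direction
    return pos

entities = ["Alice", "Paris", "France"]
triples = [("Alice", "born_in", "Paris"), ("Paris", "capital_of", "France")]
matrix = build_position_matrix(entities, triples)
```

In a PLM, each index would pick one of four learned positional embeddings that are added to the token representations or attention scores, letting the model see edge direction (indices 1 vs. 2) while retaining the undirected connectivity that a symmetric encoding provides.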
Source Journal

Icon (Arts and Humanities: History and Philosophy of Science)

CiteScore: 0.30
Self-citation rate: 0.00%
Articles published: 0