{"title":"结合知识图中有向和无向结构信息的图到文本生成*","authors":"Hongda Gong, Shimin Shan, Hongkui Wei","doi":"10.1109/ICNLP58431.2023.00064","DOIUrl":null,"url":null,"abstract":"Graph-to-text generation task is transforms knowledge graphs into natural language. In current research, pretrained language models(PLMs) have shown better performance than structured graph encoders in the generation task. Currently, PLMs serialise knowledge graphs mostly by transforming them into undirected graph structures. The advantage of an undirected graph structure is that it provides a more comprehensive representation of the information in knowledge graph, but it is difficult to capture the dependencies between entities, so the information represented may not be accurate. Therefore, We use four types of positional embedding to obtain both the directed and undirected structure of the knowledge graph, so that we can more fully represent the information in knowledge graph, and the dependencies between entities. We then add a semantic aggregation module to the Transformer layer of PLMs, which is used to obtain a more comprehensive representation of the information in knowledge graph, as well as to capture the dependencies between entities. Thus, our approach combines the advantages of both directed and undirected structural information. In addition, our new approach is more capable of capturing generic knowledge and can show better results with small samples of data.","PeriodicalId":53637,"journal":{"name":"Icon","volume":"2014 1","pages":"313-318"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Graph-to-Text Generation Combining Directed and Undirected Structural Information in Knowledge Graphs*\",\"authors\":\"Hongda Gong, Shimin Shan, Hongkui Wei\",\"doi\":\"10.1109/ICNLP58431.2023.00064\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Graph-to-text generation task is transforms knowledge graphs into natural language. In current research, pretrained language models(PLMs) have shown better performance than structured graph encoders in the generation task. Currently, PLMs serialise knowledge graphs mostly by transforming them into undirected graph structures. The advantage of an undirected graph structure is that it provides a more comprehensive representation of the information in knowledge graph, but it is difficult to capture the dependencies between entities, so the information represented may not be accurate. Therefore, We use four types of positional embedding to obtain both the directed and undirected structure of the knowledge graph, so that we can more fully represent the information in knowledge graph, and the dependencies between entities. We then add a semantic aggregation module to the Transformer layer of PLMs, which is used to obtain a more comprehensive representation of the information in knowledge graph, as well as to capture the dependencies between entities. Thus, our approach combines the advantages of both directed and undirected structural information. 
In addition, our new approach is more capable of capturing generic knowledge and can show better results with small samples of data.\",\"PeriodicalId\":53637,\"journal\":{\"name\":\"Icon\",\"volume\":\"2014 1\",\"pages\":\"313-318\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Icon\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICNLP58431.2023.00064\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Arts and Humanities\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Icon","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICNLP58431.2023.00064","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Arts and Humanities","Score":null,"Total":0}
The graph-to-text generation task transforms knowledge graphs into natural language. In current research, pretrained language models (PLMs) have outperformed structured graph encoders on this task. PLMs currently serialise knowledge graphs mostly by converting them into undirected graph structures. An undirected structure represents the information in the knowledge graph more comprehensively, but it struggles to capture the dependencies between entities, so the represented information may be inaccurate. We therefore use four types of positional embedding to encode both the directed and the undirected structure of the knowledge graph, so that we can represent the information in the graph, and the dependencies between entities, more fully. We then add a semantic aggregation module to the Transformer layers of the PLM, which yields a more comprehensive representation of the graph while also capturing the dependencies between entities. Our approach thus combines the advantages of directed and undirected structural information. In addition, it is better at capturing generic knowledge and shows stronger results on small samples of data.
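The abstract does not spell out what the four positional embedding types are, and no code accompanies this record. As a minimal illustrative sketch only, assume the four types index a token's triple, its role within the triple (head / relation / tail), its depth along directed edges, and its distance in the undirected view; their sum would then encode both graph views at once. All class and parameter names below are hypothetical:

```python
import torch
import torch.nn as nn

class GraphPositionalEmbeddings(nn.Module):
    """Hypothetical sketch of four summed positional embedding tables
    that jointly encode the directed and undirected structure of a
    linearised knowledge graph. The four types assumed here are
    illustrative; the paper's abstract does not specify them."""

    def __init__(self, hidden_size, max_triples=64, num_roles=3,
                 max_depth=32, max_dist=32):
        super().__init__()
        self.triple_emb = nn.Embedding(max_triples, hidden_size)    # which triple the token belongs to
        self.role_emb = nn.Embedding(num_roles, hidden_size)        # head, relation, or tail
        self.directed_emb = nn.Embedding(max_depth, hidden_size)    # depth along directed edges
        self.undirected_emb = nn.Embedding(max_dist, hidden_size)   # distance ignoring edge direction

    def forward(self, triple_ids, role_ids, directed_ids, undirected_ids):
        # Each input: LongTensor of shape (batch, seq_len).
        # The four lookups are summed, as is standard for added
        # positional signals in Transformer inputs.
        return (self.triple_emb(triple_ids)
                + self.role_emb(role_ids)
                + self.directed_emb(directed_ids)
                + self.undirected_emb(undirected_ids))
```

In such a scheme the resulting embedding would simply be added to the PLM's token embeddings before the first Transformer layer, so the serialised graph keeps its structural information without changing the model's input interface.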
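Likewise, the semantic aggregation module added to the Transformer layer is only named, not specified. One plausible reading, sketched here purely as an assumption, is a learned gate that fuses the hidden states computed under the directed and undirected views:

```python
import torch
import torch.nn as nn

class SemanticAggregation(nn.Module):
    """Illustrative gated fusion of hidden states from the directed and
    undirected views of the graph. The gating mechanism is an assumption;
    the paper only states that a semantic aggregation module is inserted
    into the Transformer layer."""

    def __init__(self, hidden_size):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, h_directed, h_undirected):
        # h_directed, h_undirected: (batch, seq_len, hidden_size)
        g = torch.sigmoid(self.gate(torch.cat([h_directed, h_undirected], dim=-1)))
        # Per-token, per-dimension interpolation between the two views.
        return g * h_directed + (1 - g) * h_undirected
```

A gated sum would let the model lean on the undirected view for coverage and on the directed view for entity dependencies on a per-token basis, which matches the abstract's stated goal of combining both kinds of structural information; the paper may well use a different mechanism.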