{"title":"将结构信息与语义信息相结合的知识图谱补全方法","authors":"Binhao Hu;Jianpeng Zhang;Hongchang Chen","doi":"10.23919/cje.2022.00.299","DOIUrl":null,"url":null,"abstract":"With the development of knowledge graphs, a series of applications based on knowledge graphs have emerged. The incompleteness of knowledge graphs makes the effect of the downstream applications affected by the quality of the knowledge graphs. To improve the quality of knowledge graphs, translation-based graph embeddings such as TransE, learn structural information by representing triples as low-dimensional dense vectors. However, it is difficult to generalize to the unseen entities that are not observed during training but appear during testing. Other methods use the powerful representational ability of pre-trained language models to learn entity descriptions and contextual representation of triples. Although they are robust to incompleteness, they need to calculate the score of all candidate entities for each triple during inference. We consider combining two models to enhance the robustness of unseen entities by semantic information, and prevent combined explosion by reducing inference overhead through structured information. We use a pre-training language model to code triples and learn the semantic information within them, and use a hyperbolic space-based distance model to learn structural information, then integrate the two types of information together. We evaluate our model by performing link prediction experiments on standard datasets. The experimental results show that our model achieves better performances than state-of-the-art methods on two standard datasets.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":"33 6","pages":"1412-1420"},"PeriodicalIF":1.6000,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10748382","citationCount":"0","resultStr":"{\"title\":\"Knowledge Graph Completion Method of Combining Structural Information with Semantic Information\",\"authors\":\"Binhao Hu;Jianpeng Zhang;Hongchang Chen\",\"doi\":\"10.23919/cje.2022.00.299\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the development of knowledge graphs, a series of applications based on knowledge graphs have emerged. The incompleteness of knowledge graphs makes the effect of the downstream applications affected by the quality of the knowledge graphs. To improve the quality of knowledge graphs, translation-based graph embeddings such as TransE, learn structural information by representing triples as low-dimensional dense vectors. However, it is difficult to generalize to the unseen entities that are not observed during training but appear during testing. Other methods use the powerful representational ability of pre-trained language models to learn entity descriptions and contextual representation of triples. Although they are robust to incompleteness, they need to calculate the score of all candidate entities for each triple during inference. We consider combining two models to enhance the robustness of unseen entities by semantic information, and prevent combined explosion by reducing inference overhead through structured information. We use a pre-training language model to code triples and learn the semantic information within them, and use a hyperbolic space-based distance model to learn structural information, then integrate the two types of information together. 
We evaluate our model by performing link prediction experiments on standard datasets. The experimental results show that our model achieves better performances than state-of-the-art methods on two standard datasets.\",\"PeriodicalId\":50701,\"journal\":{\"name\":\"Chinese Journal of Electronics\",\"volume\":\"33 6\",\"pages\":\"1412-1420\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2024-11-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10748382\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Chinese Journal of Electronics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10748382/\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chinese Journal of Electronics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10748382/","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Knowledge Graph Completion Method of Combining Structural Information with Semantic Information
With the development of knowledge graphs, a range of applications built on them has emerged. Because knowledge graphs are incomplete, the performance of these downstream applications depends on the quality of the underlying graph. To improve that quality, translation-based graph embeddings such as TransE learn structural information by representing triples as low-dimensional dense vectors. However, such models struggle to generalize to unseen entities, i.e., entities that are not observed during training but appear during testing. Other methods exploit the strong representational ability of pre-trained language models to learn entity descriptions and contextual representations of triples. Although these methods are robust to incompleteness, they must score every candidate entity for each triple during inference. We combine the two kinds of models: semantic information improves robustness to unseen entities, while structural information reduces inference overhead and avoids combinatorial explosion. Specifically, we use a pre-trained language model to encode triples and learn the semantic information they contain, use a hyperbolic-space distance model to learn structural information, and then integrate the two types of information. We evaluate the model with link prediction experiments on standard datasets. The experimental results show that our model outperforms state-of-the-art methods on two standard datasets.
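The abstract describes fusing a pre-trained language model's semantic encoding of triples with a hyperbolic (Poincaré-ball) structural distance. Below is a minimal PyTorch sketch of that general idea, not the paper's actual architecture: the language model is replaced by a precomputed text embedding, the head-plus-relation translation uses plain Euclidean addition rather than Möbius addition, and the weighted-sum fusion (alpha) is an assumed choice. Names such as HybridKGCScorer and all dimensions are illustrative.

```python
import torch
import torch.nn as nn


class HybridKGCScorer(nn.Module):
    """Illustrative scorer combining a semantic (text-based) score with a
    hyperbolic Poincare-ball structural distance. Module names, dimensions,
    and the weighted-sum fusion are assumptions, not the paper's design."""

    def __init__(self, num_entities, num_relations,
                 text_dim=768, struct_dim=32, alpha=0.5):
        super().__init__()
        # Stand-in for a pre-trained language model: in practice the triple's
        # textual description would be encoded by e.g. BERT; here we only
        # project a precomputed text embedding to keep the sketch self-contained.
        self.text_proj = nn.Linear(text_dim, 1)
        # Structural embeddings, initialized near the origin of the Poincare
        # ball; the distance function clamps norms for numerical safety.
        self.ent_emb = nn.Embedding(num_entities, struct_dim)
        self.rel_emb = nn.Embedding(num_relations, struct_dim)
        nn.init.uniform_(self.ent_emb.weight, -1e-3, 1e-3)
        nn.init.uniform_(self.rel_emb.weight, -1e-3, 1e-3)
        self.alpha = alpha  # assumed fusion weight between the two scores

    @staticmethod
    def poincare_distance(u, v, eps=1e-5):
        # d(u, v) = arcosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
        sq_diff = (u - v).pow(2).sum(dim=-1)
        sq_u = u.pow(2).sum(dim=-1).clamp(max=1 - eps)
        sq_v = v.pow(2).sum(dim=-1).clamp(max=1 - eps)
        x = 1 + 2 * sq_diff / ((1 - sq_u) * (1 - sq_v))
        return torch.acosh(x.clamp(min=1 + eps))

    def forward(self, head_idx, rel_idx, tail_idx, triple_text_emb):
        # Semantic score: projection of a (precomputed) text embedding of the
        # verbalized triple, e.g. "[head description] [relation] [tail description]".
        semantic_score = self.text_proj(triple_text_emb).squeeze(-1)
        # Structural score: negative hyperbolic distance between the
        # relation-translated head and the tail (Euclidean translation used
        # here for simplicity; a full hyperbolic model would use Mobius addition).
        h = self.ent_emb(head_idx) + self.rel_emb(rel_idx)
        t = self.ent_emb(tail_idx)
        structural_score = -self.poincare_distance(h, t)
        # Weighted fusion of semantic and structural evidence.
        return self.alpha * semantic_score + (1 - self.alpha) * structural_score


# Usage example with random stand-in data.
scorer = HybridKGCScorer(num_entities=100, num_relations=10)
heads = torch.tensor([0, 1])
rels = torch.tensor([2, 3])
tails = torch.tensor([4, 5])
text_emb = torch.randn(2, 768)  # would come from a frozen or fine-tuned PLM
print(scorer(heads, rels, tails, text_emb))
```

In a full model of this kind, the semantic branch would presumably verbalize each triple from entity descriptions and fine-tune the language model, while the structural branch would be trained with negative sampling so that candidate ranking at inference can rely mainly on the cheap distance score.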
Journal Introduction:
CJE focuses on the emerging fields of electronics and publishes innovative and transformative research papers. Most papers published in CJE come from universities and research institutes and present their innovative research results. Both theoretical and practical contributions are encouraged, and original research papers reporting novel solutions to hot topics in electronics are strongly recommended.