Meta-Learning Based Dynamic Adaptive Relation Learning for Few-Shot Knowledge Graph Completion

Linqin Cai, Lingjun Wang, Rongdi Yuan, Tingjie Lai

DOI: 10.1016/j.bdr.2023.100394 · Published: 2023-08-28
Source: https://www.sciencedirect.com/science/article/pii/S2214579623000278
As artificial intelligence gradually steps into the cognitive intelligence stage, knowledge graphs (KGs) play an increasingly important role in many natural language processing tasks. Because long-tail relations are prevalent in KGs, few-shot knowledge graph completion (KGC) for link prediction over long-tail relations has gradually become a hot research topic. Current few-shot KGC methods mainly focus on static representations of surrounding entities to explore the potential semantic features of entities, while ignoring the dynamic properties among entities and the particular influence of long-tail relations on link prediction. In this paper, a new meta-learning based dynamic adaptive relation learning model (DARL) is proposed for few-shot KGC. To obtain better semantic information for the meta knowledge, the proposed DARL model applies a dynamic neighbor encoder to incorporate neighbor relations into entity embeddings. In addition, DARL builds an attention-based fusion strategy over different attributes of the same relation to further enhance its relation-meta learning ability. We evaluate DARL on two public benchmark datasets, NELL-One and WIKI-One, for link prediction. Extensive experimental results indicate that DARL outperforms state-of-the-art models, with average relative improvements of about 23.37% in MRR and 32.46% in Hits@1 on NELL-One.
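The abstract describes two mechanisms: a dynamic neighbor encoder that folds neighbor relations into entity embeddings, and an attention-based fusion over relation attributes. The paper's code is not reproduced here, so the following is only a minimal PyTorch sketch of attention-weighted neighbor encoding in that spirit; the module name, dimensions, scoring form, and fusion step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (NOT the DARL code): attention over (relation, neighbor) pairs
# to build an enhanced entity embedding, as one plausible reading of a
# "dynamic neighbor encoder". All design choices below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeighborAttentionEncoder(nn.Module):
    """Aggregates (relation, neighbor-entity) pairs with per-neighbor attention,
    so the neighbor relations most relevant to the entity contribute more."""

    def __init__(self, num_entities: int, num_relations: int, dim: int = 100):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, dim)
        self.rel_emb = nn.Embedding(num_relations, dim)
        # Scores each concatenated (relation, neighbor) pair; hypothetical choice.
        self.attn = nn.Linear(2 * dim, 1)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, entity_ids, neighbor_rel_ids, neighbor_ent_ids, neighbor_mask):
        # entity_ids:       (B,)      center entities
        # neighbor_rel_ids: (B, N)    relations to each neighbor
        # neighbor_ent_ids: (B, N)    neighbor entities
        # neighbor_mask:    (B, N)    1 for real neighbors, 0 for padding
        center = self.ent_emb(entity_ids)                       # (B, d)
        rels = self.rel_emb(neighbor_rel_ids)                   # (B, N, d)
        nbrs = self.ent_emb(neighbor_ent_ids)                   # (B, N, d)
        pairs = torch.cat([rels, nbrs], dim=-1)                 # (B, N, 2d)

        scores = self.attn(pairs).squeeze(-1)                   # (B, N)
        scores = scores.masked_fill(neighbor_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)   # (B, N, 1)

        context = (weights * self.proj(pairs)).sum(dim=1)       # (B, d)
        # Fuse the aggregated neighborhood with the raw entity embedding.
        return F.normalize(center + context, dim=-1)


if __name__ == "__main__":
    enc = NeighborAttentionEncoder(num_entities=1000, num_relations=200, dim=100)
    ents = torch.randint(0, 1000, (4,))
    nbr_rels = torch.randint(0, 200, (4, 8))
    nbr_ents = torch.randint(0, 1000, (4, 8))
    mask = torch.ones(4, 8)
    print(enc(ents, nbr_rels, nbr_ents, mask).shape)  # torch.Size([4, 100])
```

In a few-shot KGC pipeline, embeddings produced this way for the support pairs of a long-tail relation would feed a relation-meta learner; the simple additive fusion above stands in for whatever attention-based fusion the paper actually uses.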