Large language models can better understand knowledge graphs than we thought

Knowledge-Based Systems · IF 7.6 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2025-03-15 · Epub Date: 2025-02-15 · DOI: 10.1016/j.knosys.2025.113060
Xinbang Dai, Yuncheng Hua, Tongtong Wu, Yang Sheng, Qiu Ji, Guilin Qi
{"title":"Large language models can better understand knowledge graphs than we thought","authors":"Xinbang Dai ,&nbsp;Yuncheng Hua ,&nbsp;Tongtong Wu ,&nbsp;Yang Sheng ,&nbsp;Qiu Ji ,&nbsp;Guilin Qi","doi":"10.1016/j.knosys.2025.113060","DOIUrl":null,"url":null,"abstract":"<div><div>When we integrate factual knowledge from knowledge graphs (KGs) into large language models (LLMs) to enhance their performance, the cost of injection through training increases with the scale of the models. Consequently, there is significant interest in developing prompt strategies that effectively incorporate KG information into LLMs. However, the community has not yet comprehensively understood how LLMs process and interpret KG information in different input formats and organizations within prompts, and researchers often rely on trial and error. To address this gap, we design extensive experiments to empirically study LLMs’ comprehension of different KG prompts. At the literal level, we reveal LLMs’ preferences for various input formats (from linearized triples to fluent natural language text). At the attention distribution level, we discuss the underlying mechanisms driving these preferences. We then investigate how the organization of structured knowledge impacts LLMs and evaluate LLMs’ robustness in processing and utilizing KG information in practical scenarios. Our experiments show that (1) linearized triples are more effective than fluent NL text in helping LLMs understand KG information and answer fact-intensive questions; (2) Different LLMs exhibit varying preferences for different organizational formats of triples; (3) LLMs with larger scales are more susceptible to noisy, incomplete subgraphs.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"312 ","pages":"Article 113060"},"PeriodicalIF":7.6000,"publicationDate":"2025-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Knowledge-Based Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0950705125001078","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/2/15 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

When we integrate factual knowledge from knowledge graphs (KGs) into large language models (LLMs) to enhance their performance, the cost of injection through training increases with the scale of the models. Consequently, there is significant interest in developing prompt strategies that effectively incorporate KG information into LLMs. However, the community does not yet comprehensively understand how LLMs process and interpret KG information presented in different input formats and organizations within prompts, and researchers often rely on trial and error. To address this gap, we design extensive experiments to empirically study LLMs' comprehension of different KG prompts. At the literal level, we reveal LLMs' preferences for various input formats (from linearized triples to fluent natural language text). At the attention distribution level, we discuss the underlying mechanisms driving these preferences. We then investigate how the organization of structured knowledge impacts LLMs and evaluate LLMs' robustness in processing and utilizing KG information in practical scenarios. Our experiments show that (1) linearized triples are more effective than fluent natural language text in helping LLMs understand KG information and answer fact-intensive questions; (2) different LLMs exhibit varying preferences for different organizational formats of triples; and (3) larger-scale LLMs are more susceptible to noisy, incomplete subgraphs.
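
To make the contrast between the two input formats concrete, the sketch below shows one plausible way to turn a retrieved KG subgraph into either a linearized-triple prompt or a fluent natural-language prompt. This is a minimal illustration under our own assumptions, not the authors' actual prompting code; the example triples, templates, and function names are hypothetical.

# Minimal sketch of the two KG prompt formats compared in the paper.
# Triples, templates, and function names are illustrative assumptions,
# not the authors' implementation.

triples = [
    ("Marie Curie", "award_received", "Nobel Prize in Physics"),
    ("Marie Curie", "field_of_work", "radioactivity"),
]

def linearize(triples):
    """Format each (head, relation, tail) triple on its own line."""
    return "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)

def verbalize(triples):
    """Render the same triples as fluent natural-language sentences."""
    templates = {
        "award_received": "{h} received the {t}.",
        "field_of_work": "{h} worked in the field of {t}.",
    }
    return " ".join(templates[r].format(h=h, t=t) for h, r, t in triples)

question = "Which prize did Marie Curie receive?"

# Linearized-triple prompt: structured facts passed almost verbatim.
prompt_triples = (
    "Answer the question using the knowledge graph facts below.\n"
    f"Facts:\n{linearize(triples)}\n"
    f"Question: {question}"
)

# Fluent natural-language prompt built from the same subgraph.
prompt_text = (
    "Answer the question using the passage below.\n"
    f"Passage: {verbalize(triples)}\n"
    f"Question: {question}"
)

According to the abstract's findings, the linearized-triple variant would be the preferred choice for fact-intensive questions, although the most effective organization of the triples may differ from model to model.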
Source journal: Knowledge-Based Systems (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 14.80
Self-citation rate: 12.50%
Articles published: 1245
Review time: 7.8 months
About the journal: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.