{"title":"基于可追溯 LLM 的知识图谱语句验证","authors":"Daniel Adam, Tomáš Kliegr","doi":"arxiv-2409.07507","DOIUrl":null,"url":null,"abstract":"This article presents a method for verifying RDF triples using LLMs, with an\nemphasis on providing traceable arguments. Because the LLMs cannot currently\nreliably identify the origin of the information used to construct the response\nto the user query, our approach is to avoid using internal LLM factual\nknowledge altogether. Instead, verified RDF statements are compared to chunks\nof external documents retrieved through a web search or Wikipedia. To assess\nthe possible application of this workflow on biosciences content, we evaluated\n1,719 positive statements from the BioRED dataset and the same number of newly\ngenerated negative statements. The resulting precision is 88%, and recall is\n44%. This indicates that the method requires human oversight. We demonstrate\nthe method on Wikidata, where a SPARQL query is used to automatically retrieve\nstatements needing verification. Overall, the results suggest that LLMs could\nbe used for large-scale verification of statements in KGs, a task previously\nunfeasible due to human annotation costs.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"96 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Traceable LLM-based validation of statements in knowledge graphs\",\"authors\":\"Daniel Adam, Tomáš Kliegr\",\"doi\":\"arxiv-2409.07507\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This article presents a method for verifying RDF triples using LLMs, with an\\nemphasis on providing traceable arguments. Because the LLMs cannot currently\\nreliably identify the origin of the information used to construct the response\\nto the user query, our approach is to avoid using internal LLM factual\\nknowledge altogether. Instead, verified RDF statements are compared to chunks\\nof external documents retrieved through a web search or Wikipedia. To assess\\nthe possible application of this workflow on biosciences content, we evaluated\\n1,719 positive statements from the BioRED dataset and the same number of newly\\ngenerated negative statements. The resulting precision is 88%, and recall is\\n44%. This indicates that the method requires human oversight. We demonstrate\\nthe method on Wikidata, where a SPARQL query is used to automatically retrieve\\nstatements needing verification. 
Overall, the results suggest that LLMs could\\nbe used for large-scale verification of statements in KGs, a task previously\\nunfeasible due to human annotation costs.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":\"96 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07507\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07507","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Traceable LLM-based validation of statements in knowledge graphs
This article presents a method for verifying RDF triples using LLMs, with an emphasis on providing traceable arguments. Because LLMs currently cannot reliably identify the origin of the information they use to construct a response, our approach avoids relying on the LLM's internal factual knowledge altogether. Instead, the RDF statements to be verified are compared against chunks of external documents retrieved through a web search or from Wikipedia; a minimal sketch of this retrieve-and-compare step follows.
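The workflow can be pictured as a small retrieve-then-judge loop: fetch candidate text chunks for a triple, then ask whether any chunk supports it, returning the supporting chunk as the traceable argument. The Python sketch below is illustrative only, not the authors' implementation: it assumes a Wikipedia-based retriever, the 500-character chunking is arbitrary, and `chunk_supports` is a hypothetical placeholder for the actual LLM entailment call.

```python
# Sketch of triple verification against retrieved Wikipedia chunks.
# Requires the `requests` package; not the paper's actual code.
import requests

API = "https://en.wikipedia.org/w/api.php"

def fetch_chunks(query: str, chunk_size: int = 500) -> list[tuple[str, str]]:
    """Search Wikipedia and split the top articles' plain text into chunks."""
    search = requests.get(API, params={
        "action": "query", "list": "search", "srsearch": query,
        "srlimit": 3, "format": "json"}).json()
    chunks = []
    for hit in search["query"]["search"]:
        page = requests.get(API, params={
            "action": "query", "prop": "extracts", "explaintext": 1,
            "titles": hit["title"], "format": "json"}).json()
        text = next(iter(page["query"]["pages"].values())).get("extract", "")
        chunks += [(hit["title"], text[i:i + chunk_size])
                   for i in range(0, len(text), chunk_size)]
    return chunks

def chunk_supports(subj: str, pred: str, obj: str, chunk: str) -> bool:
    """Hypothetical placeholder for the LLM entailment call: the real
    workflow would ask an LLM whether `chunk` states that (subj, pred, obj)
    holds. Here we only check that both entity labels co-occur, purely to
    keep the sketch runnable."""
    low = chunk.lower()
    return subj.lower() in low and obj.lower() in low

def verify(subj: str, pred: str, obj: str):
    """Return a verdict plus the evidence chunk, so the answer is traceable."""
    for title, chunk in fetch_chunks(f"{subj} {pred} {obj}"):
        if chunk_supports(subj, pred, obj, chunk):
            return True, (title, chunk)  # evidence chunk = traceable argument
    return False, None

supported, evidence = verify("Prague", "capital of", "Czech Republic")
print(supported, evidence[0] if evidence else None)
```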
To assess the applicability of this workflow to bioscience content, we evaluated 1,719 positive statements from the BioRED dataset and an equal number of newly generated negative statements. The resulting precision is 88% and the recall is 44%, which indicates that the method still requires human oversight.

We demonstrate the method on Wikidata, where a SPARQL query is used to automatically retrieve statements needing verification; a sketch of such a query follows below. Overall, the results suggest that LLMs could be used for large-scale verification of statements in knowledge graphs (KGs), a task previously infeasible due to human annotation costs.
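For context, statements can be pulled from Wikidata's public SPARQL endpoint with a query such as the hypothetical one below (the paper's exact query is not reproduced here). It fetches country-capital pairs, which could then be fed to the verification loop sketched earlier.

```python
# Sketch: retrieve candidate statements from Wikidata via SPARQL.
# Illustrative example, not the paper's query; requires `requests`.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"

# Fetch 10 (country, capital) statements; P31 = "instance of",
# Q6256 = "country", P36 = "capital".
QUERY = """
SELECT ?countryLabel ?capitalLabel WHERE {
  ?country wdt:P31 wd:Q6256 ;   # instance of: country
           wdt:P36 ?capital .   # capital
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

resp = requests.get(ENDPOINT,
                    params={"query": QUERY, "format": "json"},
                    headers={"User-Agent": "kg-validation-demo/0.1"})
for row in resp.json()["results"]["bindings"]:
    subj = row["countryLabel"]["value"]
    obj = row["capitalLabel"]["value"]
    # Each retrieved statement is a candidate for LLM-based verification.
    print(subj, "-> capital:", obj)
```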