Structuring Authenticity Assessments on Historical Documents using LLMs

Andrea Schimmenti, Valentina Pasqual, Francesca Tomasi, Fabio Vitali, Marieke van Erp
arXiv: 2407.09290 · arXiv - CS - Digital Libraries · Published 2024-07-12 · Citations: 0

Abstract

Given the wide use of forgery throughout history, scholars have engaged, and continue to engage, in assessing the authenticity of historical documents. However, online catalogues merely offer descriptive metadata for these documents, relegating discussions about their authenticity to free-text formats, making it difficult to study these assessments at scale. This study explores the generation of structured data about documents' authenticity assessment from natural language texts. Our pipeline exploits Large Language Models (LLMs) to select, extract and classify relevant claims about the topic without the need for training, and Semantic Web technologies to structure and type-validate the LLM's results. The final output is a catalogue of documents whose authenticity has been debated, along with scholars' opinions on their authenticity. This process can serve as a valuable resource for integration into catalogues, allowing room for more intricate queries and analyses of the evolution of these debates over centuries.

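The pipeline the abstract describes has two stages: an LLM extracts and classifies claims about a document's authenticity, and the results are then structured and type-validated before entering the catalogue. The sketch below illustrates only the second stage under stated assumptions: the paper uses Semantic Web technologies (RDF/ontology typing) for validation, whereas plain-Python type checks stand in here, and the claim fields, the allowed assessment labels, and the simulated LLM outputs are all illustrative, not taken from the paper.

```python
import json

# Hypothetical schema for an authenticity-assessment claim: each field
# must be present and of the expected type. The paper type-validates
# against Semantic Web ontologies; a dict of Python types stands in here.
CLAIM_SCHEMA = {
    "document": str,      # the historical document under discussion
    "scholar": str,       # who made the assessment
    "assessment": str,    # the scholar's verdict
    "year": int,          # when the opinion was published
}

# Illustrative closed set of verdict labels (an assumption, not the paper's).
ALLOWED_ASSESSMENTS = {"authentic", "forgery", "uncertain"}

def validate_claim(raw: str):
    """Parse one LLM output (a JSON string) and type-validate it.

    Returns the structured claim dict, or None if validation fails,
    so malformed LLM output never enters the catalogue.
    """
    try:
        claim = json.loads(raw)
    except json.JSONDecodeError:
        return None
    for field, expected_type in CLAIM_SCHEMA.items():
        if not isinstance(claim.get(field), expected_type):
            return None
    if claim["assessment"] not in ALLOWED_ASSESSMENTS:
        return None
    return claim

# Simulated LLM outputs about a famous debated document: one well-formed
# claim, one where the year comes back as a string instead of an int.
good = '{"document": "Donation of Constantine", "scholar": "Lorenzo Valla", "assessment": "forgery", "year": 1440}'
bad = '{"document": "Donation of Constantine", "scholar": "Lorenzo Valla", "assessment": "forgery", "year": "1440"}'

print(validate_claim(good) is not None)  # True: enters the catalogue
print(validate_claim(bad) is None)       # True: rejected by the type check
```

Rejecting rather than repairing malformed output keeps the catalogue clean; in the paper's setting, the same gatekeeping role is played by ontology-based type validation over the LLM's extractions.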