Creating and validating a scholarly knowledge graph using natural language processing and microtask crowdsourcing.

Q3 Engineering · Dyna-Colombia · Pub Date: 2024-01-01 · Epub Date: 2023-04-05 · DOI: 10.1007/s00799-023-00360-7
Allard Oelen, Markus Stocker, Sören Auer
Citations: 0

Abstract

Due to the growing number of scholarly publications, finding relevant articles becomes increasingly difficult. Scholarly knowledge graphs can be used to organize the scholarly knowledge presented within those publications and represent it in machine-readable formats. Natural language processing (NLP) provides scalable methods to automatically extract knowledge from articles and populate scholarly knowledge graphs. However, NLP extraction is generally not sufficiently accurate and thus fails to generate data of sufficiently high granularity and quality. In this work, we present TinyGenius, a methodology to validate NLP-extracted scholarly knowledge statements using microtasks performed with crowdsourcing. TinyGenius is employed to populate a paper-centric knowledge graph, using five distinct NLP methods. We extend our previous work on the TinyGenius methodology in various ways. Specifically, we discuss the NLP tasks in more detail and include an explanation of the data model. Moreover, we present a user evaluation where participants validate the generated NLP statements. The results indicate that employing microtasks for statement validation is a promising approach despite the varying participant agreement for different microtasks.
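The workflow the abstract describes — NLP methods extract candidate statements, crowd microtasks vote on them, and only statements with sufficient agreement enter the knowledge graph — can be sketched roughly as below. This is a minimal illustration, not the paper's actual data model or API; the class names, vote representation, and thresholds are all assumptions.

```python
# Hypothetical sketch of a TinyGenius-style validation flow: NLP-extracted
# statements are stored as triples with provenance, then accepted or rejected
# based on crowd votes collected through microtasks.
from dataclasses import dataclass, field

@dataclass
class Statement:
    subject: str        # e.g. a paper identifier
    predicate: str      # e.g. "mentions_concept"
    obj: str            # the extracted value
    nlp_method: str     # which extraction method produced the statement
    votes: list = field(default_factory=list)  # True = valid, False = invalid

    def agreement(self) -> float:
        """Fraction of crowd votes that marked the statement valid."""
        return sum(self.votes) / len(self.votes) if self.votes else 0.0

def validate(statements, min_votes=3, threshold=0.7):
    """Keep statements with enough votes and crowd agreement above threshold.

    min_votes and threshold are illustrative parameters, not values from
    the paper.
    """
    return [s for s in statements
            if len(s.votes) >= min_votes and s.agreement() >= threshold]

s1 = Statement("paper:123", "mentions_concept", "knowledge graph",
               nlp_method="entity_linking", votes=[True, True, True])
s2 = Statement("paper:123", "uses_dataset", "SomeDataset",
               nlp_method="ner", votes=[True, False, False])

accepted = validate([s1, s2])
print([s.obj for s in accepted])  # only the high-agreement statement survives
```

The key design point mirrored here is that low-confidence NLP output is not discarded outright; it is held as candidate data until human microtask votes either confirm or reject it.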


Source journal
Dyna-Colombia (Engineering & Technology: Engineering, Multidisciplinary)
CiteScore: 1.30
Self-citation rate: 0.00%
Articles published: 0
Review turnaround: 4-8 weeks
Journal introduction: The DYNA journal, consistent with its aim of disseminating research in engineering, covers all disciplines within the broad area of Engineering and Technology (OECD), through research articles, case studies, and review articles resulting from the work of national and international researchers.