Improving Extraction of Chinese Open Relations Using Pre-trained Language Model and Knowledge Enhancement

IF 1.3 · CAS Tier 3 (Computer Science) · Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS · Data Intelligence · Pub Date: 2023-11-07 · DOI: 10.1162/dint_a_00227
Chaojie Wen, Xudong Jia, Tao Chen
Citations: 0

Abstract

Open Relation Extraction (ORE) is the task of extracting semantic relations from a text document. Current ORE systems have significantly improved their efficiency in obtaining Chinese relations compared with conventional systems, which depend heavily on feature engineering or syntactic parsing. However, these ORE systems do not use robust neural networks such as pre-trained language models to take advantage of large-scale unstructured data effectively. In response to this issue, a new system entitled Chinese Open Relation Extraction with Knowledge Enhancement (CORE-KE) is presented in this paper. The CORE-KE system employs a pre-trained language model (with the support of a Bidirectional Long Short-Term Memory (BiLSTM) layer and a Masked Conditional Random Field (Masked CRF) layer) on unstructured data in order to improve Chinese open relation extraction. Entity descriptions in Wikidata and additional knowledge (in the form of triple facts) extracted from Chinese ORE datasets are used to fine-tune the pre-trained language model. In addition, syntactic features are adopted in the training stage of the CORE-KE system for knowledge enhancement. Experimental results of the CORE-KE system on two large-scale datasets of open Chinese entities and relations demonstrate that the CORE-KE system is superior to other ORE systems. The F1-scores of the CORE-KE system on the two datasets show relative improvements of 20.1% and 1.3%, respectively, over benchmark ORE systems. The source code is available at https://github.com/cjwen15/CORE-KE.
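To illustrate what the ORE task produces, the sketch below decodes a sequence-labeling output into an open-relation triple (subject, relation, object). The BIO-style tag scheme (`B-SUB`, `I-REL`, etc.) and the example sentence are assumptions for demonstration only; the actual label set and decoding used by CORE-KE's Masked CRF layer may differ.

```python
# Illustrative only: turn per-token BIO tags into a (subject, relation, object)
# triple, as an ORE system would after sequence labeling. The tag scheme is a
# hypothetical one chosen for this sketch, not CORE-KE's actual labels.

def decode_triple(tokens, tags):
    """Group tokens into SUB/REL/OBJ spans from tags like 'B-SUB' or 'I-REL'."""
    spans = {"SUB": [], "REL": [], "OBJ": []}
    for token, tag in zip(tokens, tags):
        if tag == "O":          # token outside any span
            continue
        _, span_type = tag.split("-")
        spans[span_type].append(token)
    # Chinese text is joined without spaces
    return ("".join(spans["SUB"]), "".join(spans["REL"]), "".join(spans["OBJ"]))

# Example sentence: "鲁迅出生于绍兴" (Lu Xun was born in Shaoxing)
tokens = ["鲁", "迅", "出", "生", "于", "绍", "兴"]
tags = ["B-SUB", "I-SUB", "B-REL", "I-REL", "I-REL", "B-OBJ", "I-OBJ"]
triple = decode_triple(tokens, tags)  # → ("鲁迅", "出生于", "绍兴")
```

In the full system, the tag sequence itself would come from the pre-trained language model, BiLSTM, and Masked CRF stack described in the abstract; this sketch only covers the final decoding step.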
Source journal: Data Intelligence (COMPUTER SCIENCE, INFORMATION SYSTEMS)
CiteScore: 6.50
Self-citation rate: 15.40%
Articles per year: 40
Review time: 8 weeks