{"title":"Chinese Named Entity Recognition based on BERT-Transformer-BiLSTM-CRF Model","authors":"Yong Gan, R. Yang, Chenfang Zhang, Dongwei Jia","doi":"10.1109/ISSSR53171.2021.00029","DOIUrl":null,"url":null,"abstract":"Among many named entity recognition modes in natural languages, most of the processing in the text preprocessing stage only pays attention to the vector representation of single words and characters, and seldom pays attention to the semantic relationship in the text. In the language text information, there are many pronouns and polysemous words, which makes the problem of polysemous words appear in the text preprocessing stage. Based on this problem, this paper adopts a Chinese named entity recognition method based on the BERT-Transformer-BiLSTM-CRF model. First, use the pre-trained BERT model in a large-scale corpus to dynamically generate a sequence of word vectors according to its input context, then use the Transformer encoder to model the contextual long-distance semantic features of the text, and use the BiLSTM model to perform sentence context features Extract, and finally input the feature vector sequence into CRF (Conditional Random Field) to get the final prediction result. Tested on the public MSRA Chinese corpus. 
Experimental results on the corpus show that the model has improved accuracy, recall and F1 value than most models.","PeriodicalId":211012,"journal":{"name":"2021 7th International Symposium on System and Software Reliability (ISSSR)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 7th International Symposium on System and Software Reliability (ISSSR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSSR53171.2021.00029","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Among the many named entity recognition models for natural language, most text-preprocessing pipelines attend only to the vector representations of individual words and characters, and seldom to the semantic relationships within the text. Language text contains many pronouns and polysemous words, so the problem of polysemy arises already at the preprocessing stage. To address this problem, this paper adopts a Chinese named entity recognition method based on the BERT-Transformer-BiLSTM-CRF model. First, a BERT model pre-trained on a large-scale corpus dynamically generates a sequence of word vectors conditioned on the input context; a Transformer encoder then models the long-distance contextual semantic features of the text; a BiLSTM model extracts sentence-level context features; and finally the feature vector sequence is fed into a CRF (Conditional Random Field) to obtain the final prediction. Evaluated on the public MSRA Chinese corpus, the model achieves higher precision, recall, and F1 than most comparison models.
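The pipeline described above (contextual embeddings → Transformer encoder → BiLSTM → CRF decoding) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: a toy `nn.Embedding` stands in for the pre-trained BERT encoder, all dimensions are made-up hyperparameters, and the CRF is reduced to a learned transition matrix with Viterbi decoding (no forward-algorithm training loss is shown).

```python
import torch
import torch.nn as nn


class BertTransformerBiLSTMCRF(nn.Module):
    """Toy sketch of the BERT-Transformer-BiLSTM-CRF tagging pipeline."""

    def __init__(self, vocab_size=100, emb_dim=64, hidden=32, num_tags=7):
        super().__init__()
        # Placeholder for BERT: in the real model this would be a
        # pre-trained contextual encoder, not a plain embedding table.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=emb_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=1)
        self.bilstm = nn.LSTM(emb_dim, hidden,
                              bidirectional=True, batch_first=True)
        # Per-token emission scores over the tag set.
        self.emissions = nn.Linear(2 * hidden, num_tags)
        # CRF transition scores: transitions[i, j] = score of tag i -> tag j.
        self.transitions = nn.Parameter(torch.zeros(num_tags, num_tags))

    def forward(self, token_ids):
        x = self.embed(token_ids)           # (batch, seq, emb_dim)
        x = self.transformer(x)             # long-distance context
        x, _ = self.bilstm(x)               # sentence-level context
        return self.emissions(x)            # (batch, seq, num_tags)

    @torch.no_grad()
    def viterbi_decode(self, emissions):
        """Best tag path for one sentence; emissions: (seq_len, num_tags)."""
        score = emissions[0]
        history = []
        for t in range(1, emissions.size(0)):
            # score[i] + transitions[i, j] + emissions[t, j], maximized over i.
            combined = score.unsqueeze(1) + self.transitions + emissions[t].unsqueeze(0)
            score, idx = combined.max(dim=0)
            history.append(idx)
        best = score.argmax().item()
        path = [best]
        for idx in reversed(history):       # follow backpointers
            best = idx[best].item()
            path.append(best)
        return list(reversed(path))
```

In a full implementation the CRF would be trained with the negative log-likelihood computed by the forward algorithm; the sketch only shows the inference-time Viterbi pass over the emission and transition scores.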