MultiCoNER: A Large-scale Multilingual Dataset for Complex Named Entity Recognition

S. Malmasi, Anjie Fang, B. Fetahu, Sudipta Kar, Oleg Rokhlenko
{"title":"MultiCoNER: A Large-scale Multilingual Dataset for Complex Named Entity Recognition","authors":"S. Malmasi, Anjie Fang, B. Fetahu, Sudipta Kar, Oleg Rokhlenko","doi":"10.48550/arXiv.2208.14536","DOIUrl":null,"url":null,"abstract":"We present AnonData, a large multilingual dataset for Named Entity Recognition that covers 3 domains (Wiki sentences, questions, and search queries) across 11 languages, as well as multilingual and code-mixing subsets. This dataset is designed to represent contemporary challenges in NER, including low-context scenarios (short and uncased text), syntactically complex entities like movie titles, and long-tail entity distributions. The 26M token dataset is compiled from public resources using techniques such as heuristic-based sentence sampling, template extraction and slotting, and machine translation. We tested the performance of two NER models on our dataset: a baseline XLM-RoBERTa model, and a state-of-the-art NER GEMNET model that leverages gazetteers. The baseline achieves moderate performance (macro-F1=54%). GEMNET, which uses gazetteers, improvement significantly (average improvement of macro-F1=+30%) and demonstrates the difficulty of our dataset. AnonData poses challenges even for large pre-trained language models, and we believe that it can help further research in building robust NER systems.","PeriodicalId":91381,"journal":{"name":"Proceedings of COLING. International Conference on Computational Linguistics","volume":"71 1","pages":"3798-3809"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"51","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of COLING. International Conference on Computational Linguistics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2208.14536","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 51

Abstract

We present AnonData, a large multilingual dataset for Named Entity Recognition that covers 3 domains (Wiki sentences, questions, and search queries) across 11 languages, as well as multilingual and code-mixing subsets. This dataset is designed to represent contemporary challenges in NER, including low-context scenarios (short and uncased text), syntactically complex entities like movie titles, and long-tail entity distributions. The 26M token dataset is compiled from public resources using techniques such as heuristic-based sentence sampling, template extraction and slotting, and machine translation. We tested the performance of two NER models on our dataset: a baseline XLM-RoBERTa model, and a state-of-the-art GEMNET model that leverages gazetteers. The baseline achieves moderate performance (macro-F1=54%). GEMNET, which uses gazetteers, improves significantly (average improvement of macro-F1=+30%), which demonstrates the difficulty of our dataset. AnonData poses challenges even for large pre-trained language models, and we believe that it can help further research in building robust NER systems.
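To make the baseline setting concrete, the sketch below shows how an XLM-RoBERTa token-classification model can be pointed at a short, uncased, low-context query of the kind the dataset targets. This is not the authors' exact configuration: the label set, input, and Hugging Face Transformers usage are assumptions for illustration, and the classification head here is untrained, so real use requires fine-tuning on the dataset's training split.

```python
# Minimal sketch of an XLM-RoBERTa NER baseline (assumed setup, not the paper's code).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical BIO tag set for illustration; the dataset's actual entity taxonomy
# and label inventory should be taken from its released annotation guidelines.
labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-CORP", "I-CORP",
          "B-GRP", "I-GRP", "B-PROD", "I-PROD", "B-CW", "I-CW"]

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# A short, uncased search-query-like input: the low-context scenario the dataset emphasizes.
text = "which films did stanley kubrick direct"
encoding = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits  # shape: (1, sequence_length, num_labels)

predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(f"{token}\t{labels[label_id]}")

# Note: with a randomly initialized head the predicted tags are meaningless;
# fine-tune on the train split and evaluate with macro-F1 to reproduce a baseline.
```

A gazetteer-augmented model such as GEMNET would extend this setup with external entity-list features; that component is not sketched here.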