On the Role of Morphological Information for Contextual Lemmatization

Computational Linguistics (IF 9.3, JCR Q2, Computer Science) · Published: 2023-11-15 · DOI: 10.1162/coli_a_00497
Olia Toporkov, Rodrigo Agerri

Abstract

Lemmatization is a natural language processing (NLP) task that consists of producing, from a given inflected word, its canonical form or lemma. Lemmatization is one of the basic tasks that facilitate downstream NLP applications, and it is of particular importance for highly inflected languages. Given that the process of obtaining a lemma from an inflected word can be explained by looking at its morphosyntactic category, it has become common practice to include fine-grained morphosyntactic information when training contextual lemmatizers, without considering whether that is optimal in terms of downstream performance. In order to address this issue, in this paper we empirically investigate the role of morphological information in developing contextual lemmatizers for six languages spanning a varied spectrum of morphological complexity: Basque, Turkish, Russian, Czech, Spanish, and English. Furthermore, and unlike the vast majority of previous work, we also evaluate lemmatizers in out-of-domain settings, which constitutes, after all, their most common application. The results of our study are rather surprising. It turns out that providing lemmatizers with fine-grained morphological features during training is not that beneficial, not even for agglutinative languages. In fact, modern contextual word representations seem to implicitly encode enough morphological information to obtain competitive contextual lemmatizers without seeing any explicit morphological signal. Moreover, our experiments suggest that the best lemmatizers out-of-domain are those using simple UPOS tags or those trained without morphology, and, finally, that current evaluation practices for lemmatization are not adequate to clearly discriminate between models.
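To make the task concrete, here is a minimal dictionary-based sketch (an illustration of the general idea, not the system studied in the paper) of why morphosyntactic category matters for lemmatization: the same inflected form can map to different lemmas, and even a coarse UPOS tag is often enough context to disambiguate. The lexicon entries are hypothetical examples chosen for illustration.

```python
# Toy contextual lemmatizer: the lemma of an ambiguous form depends on
# its part of speech, which in running text comes from context.
LEXICON = {
    # (inflected form, UPOS) -> lemma
    ("saw", "VERB"): "see",        # "She saw the film."
    ("saw", "NOUN"): "saw",        # "He bought a saw."
    ("running", "VERB"): "run",    # "They are running."
    ("running", "NOUN"): "running",# "Running is healthy."
    ("books", "NOUN"): "book",     # "Two books."
    ("books", "VERB"): "book",     # "She books a flight."
}

def lemmatize(form: str, upos: str) -> str:
    """Return the lemma for (form, UPOS); fall back to the lowercased form."""
    return LEXICON.get((form.lower(), upos), form.lower())
```

A neural contextual lemmatizer replaces the hand-built lexicon with a model that conditions on the sentence; the paper's question is whether feeding it explicit fine-grained morphological tags (rather than nothing, or just UPOS) actually helps.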
About the journal: Computational Linguistics is the longest-running publication devoted exclusively to the computational and mathematical properties of language and the design and analysis of natural language processing systems. This highly regarded quarterly offers university and industry linguists, computational linguists, artificial intelligence and machine learning investigators, cognitive scientists, speech specialists, and philosophers the latest information about the computational aspects of all the facets of research on language.