{"title":"用于孟加拉语词义消歧的修正 lesk 算法","authors":"Ratul Das, Alok Ranjan Pal, Diganta Saha","doi":"10.1007/s12046-024-02495-y","DOIUrl":null,"url":null,"abstract":"<p>This article presents a novel approach towards solving the problem of Word Sense Disambiguation (WSD) for Bengali Text. The algorithm used in this work is a modification of Lesk Algorithm. In the original algorithm, the overlap between the “context bag” and the “sense bag” items from the lexical resource (WordNet) are calculated using word pair matching. In the current approach the overlap is calculated by adopting semantic similarity measure using the fastText subword embeddings. The approach can efficiently handle unknown wordforms and discover the latent semantics of words. Significant progress has been made in WSD for English and other European Languages. Indian languages like Bengali still pose a formidable challenge. The dataset used for the work is individual sentences from the Bengali Wikipedia which is a huge collection of Bengali text ( 96 K Webpages with 1700 K sentences), the Indo WordNet for Bengali language and Bengali Online Dictionary. The results of the experiments performed are promising. The target words which have semantically distinct synsets in the WordNet give a high F1 score. The F1 score achieved is 80% which is well over the baseline and shows significant improvement over the other knowledge-based approaches tried on low resource Indian languages.</p>","PeriodicalId":21498,"journal":{"name":"Sādhanā","volume":"15 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Modified lesk algorithm for word sense disambiguation in Bengali\",\"authors\":\"Ratul Das, Alok Ranjan Pal, Diganta Saha\",\"doi\":\"10.1007/s12046-024-02495-y\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>This article presents a novel approach towards solving the problem of Word Sense Disambiguation (WSD) for Bengali Text. The algorithm used in this work is a modification of Lesk Algorithm. In the original algorithm, the overlap between the “context bag” and the “sense bag” items from the lexical resource (WordNet) are calculated using word pair matching. In the current approach the overlap is calculated by adopting semantic similarity measure using the fastText subword embeddings. The approach can efficiently handle unknown wordforms and discover the latent semantics of words. Significant progress has been made in WSD for English and other European Languages. Indian languages like Bengali still pose a formidable challenge. The dataset used for the work is individual sentences from the Bengali Wikipedia which is a huge collection of Bengali text ( 96 K Webpages with 1700 K sentences), the Indo WordNet for Bengali language and Bengali Online Dictionary. The results of the experiments performed are promising. The target words which have semantically distinct synsets in the WordNet give a high F1 score. 
The F1 score achieved is 80% which is well over the baseline and shows significant improvement over the other knowledge-based approaches tried on low resource Indian languages.</p>\",\"PeriodicalId\":21498,\"journal\":{\"name\":\"Sādhanā\",\"volume\":\"15 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Sādhanā\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s12046-024-02495-y\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Sādhanā","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s12046-024-02495-y","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This article presents a novel approach to Word Sense Disambiguation (WSD) for Bengali text. The algorithm used in this work is a modification of the Lesk algorithm. In the original algorithm, the overlap between the "context bag" and the "sense bag" items drawn from the lexical resource (WordNet) is computed by exact word-pair matching. In the current approach, the overlap is computed with a semantic similarity measure based on fastText subword embeddings, which lets the method handle unknown wordforms and capture the latent semantics of words. Significant progress has been made in WSD for English and other European languages, but Indian languages such as Bengali still pose a formidable challenge. The datasets used in this work are individual sentences from the Bengali Wikipedia, a large collection of Bengali text (96 K webpages with 1700 K sentences), the IndoWordNet for Bengali, and a Bengali online dictionary. The experimental results are promising: target words whose synsets in the WordNet are semantically distinct yield high F1 scores. The F1 score achieved is 80%, which is well above the baseline and a significant improvement over other knowledge-based approaches tried on low-resource Indian languages.
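To make the abstract's core idea concrete, below is a minimal Python sketch of a Lesk-style disambiguator in which the classic exact word-overlap count is replaced by cosine similarity between fastText subword embeddings of the context-bag and sense-bag words, as the abstract describes. The pretrained model file name (cc.bn.300.bin), the sense-inventory format, and the aggregation of best-match similarities by their mean are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of a modified Lesk: soft (embedding-based) overlap between the
# context bag and each candidate sense bag, instead of exact word matching.
# Assumes a pretrained Bengali fastText model is available locally.
import numpy as np
import fasttext

model = fasttext.load_model("cc.bn.300.bin")  # subword embeddings also cover unknown wordforms

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def bag_similarity(context_bag: list[str], sense_bag: list[str]) -> float:
    """Soft overlap: each context word is matched to its most similar sense-bag word."""
    if not context_bag or not sense_bag:
        return 0.0
    sense_vecs = [model.get_word_vector(w) for w in sense_bag]
    best_matches = [
        max(cosine(model.get_word_vector(c), s) for s in sense_vecs)
        for c in context_bag
    ]
    return float(np.mean(best_matches))

def disambiguate(context_bag: list[str], senses: dict[str, list[str]]) -> str:
    """Return the sense id whose sense bag (gloss words) best matches the context bag."""
    return max(senses, key=lambda sid: bag_similarity(context_bag, senses[sid]))
```

In use, the context bag would hold the content words of the sentence surrounding the target word, and each sense bag would hold the gloss and related words of one WordNet synset; the sense with the highest aggregate similarity is chosen, mirroring the overlap-maximization step of the original Lesk algorithm.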