
Language Resources and Evaluation: Latest Publications

NILC-Metrix: assessing the complexity of written and spoken language in Brazilian Portuguese
CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-10-17 | DOI: 10.1007/s10579-023-09693-w
Sidney Evaldo Leal, Magali Sanches Duran, Carolina Evaristo Scarton, Nathan Siegle Hartmann, Sandra Maria Aluísio
The objective of this paper is to present and make publicly available NILC-Metrix, a computational system comprising 200 metrics proposed in studies on discourse, psycholinguistics, cognitive and computational linguistics, to assess textual complexity in Brazilian Portuguese (BP). The metrics are relevant for descriptive analysis and for building computational models, and can be used to extract information from various linguistic levels of written and spoken language. They were developed over the last 13 years, starting at the end of 2007, within the scope of the PorSimples project. Once PorSimples finished, new metrics were added to the initial 48 metrics of the Coh-Metrix-Port tool. Coh-Metrix-Port adapted to BP some metrics from the Coh-Metrix tool, which computes metrics related to the cohesion and coherence of English texts. Given the large number of metrics, we present them following an organisation similar to that of Coh-Metrix v3.0, to facilitate comparisons between the Portuguese and English metrics in future studies using both tools. In this paper, we illustrate the potential of NILC-Metrix through three applications: (i) a descriptive analysis of the differences between children's film subtitles and texts written for Elementary School I (grades 1 to 5) and II (Final Years, grades 6 to 9, an age group corresponding to the transition between childhood and adolescence); (ii) a new predictor of textual complexity for the corpus of original and simplified texts of the PorSimples project; (iii) a complexity prediction model for school grades, using transcripts of children's story narratives told by teenagers. For each application, we evaluate which groups of metrics are more discriminative, showing their contribution to each task.
Citations: 5
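As a concrete illustration of the third application described in the abstract above, the sketch below trains a school-grade complexity predictor from a table of NILC-Metrix-style feature values. It is a minimal sketch under stated assumptions, not the authors' pipeline: the CSV file name, its metric columns, and the "grade" label column are hypothetical placeholders.

```python
# Minimal sketch (assumptions, not the authors' code): predict a school grade
# from precomputed NILC-Metrix-style metric values stored in a CSV file.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("nilc_metrix_features.csv")   # hypothetical file: one row per text, one column per metric
X = df.drop(columns=["grade"])                 # hypothetical label column holding the target school grade
y = df["grade"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=300, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Feature importances give a rough view of which metric groups discriminate best,
# mirroring the per-task analysis described in the abstract.
top = sorted(zip(X.columns, clf.feature_importances_), key=lambda t: -t[1])[:10]
for name, score in top:
    print(f"{name}: {score:.3f}")
```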
A semi-supervised method to generate a Persian dataset for suggestion classification
CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-09-29 | DOI: 10.1007/s10579-023-09688-7
Leila Safari, Zanyar Mohammady
Suggestion mining has become a popular subject in natural language processing (NLP) and is useful in areas such as service and product improvement. The purpose of this study is to provide an automated machine learning (ML) based approach for extracting suggestions from Persian text. In this research, a novel two-step semi-supervised method is first proposed to generate a Persian dataset called ParsSugg, which is then used for the automatic classification of user suggestions. The first step is manual labeling of data based on a proposed guideline, followed by a data augmentation phase. In the second step, using the pre-trained Persian Bidirectional Encoder Representations from Transformers model (ParsBERT) as a classifier and the data from the previous step, more data were labeled. The performance of various ML models, including Support Vector Machine (SVM), Random Forest (RF), Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and the ParsBERT language model, was examined on the generated dataset. For the suggestion class, an F-score of 97.27 was obtained for ParsBERT and about 94.5 for the SVM and CNN classifiers, a promising result for the first study of suggestion classification on Persian texts. The proposed guideline can also be used for other NLP tasks, and the generated dataset can be used in other suggestion classification tasks.
Citations: 0
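The sketch below illustrates the general recipe of fine-tuning a Persian BERT-style encoder as a suggestion/non-suggestion classifier with Hugging Face Transformers. It is an assumption-laden sketch, not the authors' code: the checkpoint name and the two toy example sentences are placeholders, and the real experiments use the ParsSugg dataset described above.

```python
# Minimal sketch: binary suggestion classification with a Persian BERT encoder.
# The checkpoint name and toy examples are assumptions, not part of the paper.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "HooshvareLab/bert-base-parsbert-uncased"  # assumed ParsBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Toy placeholder data: label 1 = suggestion, 0 = non-suggestion.
data = Dataset.from_dict({
    "text": ["لطفا امکان پرداخت آنلاین اضافه شود", "از خرید خود راضی بودم"],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

data = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="parsbert-suggestion", num_train_epochs=3,
                         per_device_train_batch_size=8, logging_steps=10)
Trainer(model=model, args=args, train_dataset=data).train()
```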
NEREL: a Russian information extraction dataset with rich annotation for nested entities, relations, and Wikidata entity links
CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-09-21 | DOI: 10.1007/s10579-023-09674-z
Natalia Loukachevitch, Ekaterina Artemova, Tatiana Batura, Pavel Braslavski, Vladimir Ivanov, Suresh Manandhar, Alexander Pugachev, Igor Rozhkov, Artem Shelmanov, Elena Tutubalina, Alexey Yandutov
This paper describes NEREL, a Russian news dataset suited for three tasks: nested named entity recognition, relation extraction, and entity linking. Compared to flat entities, nested named entities provide a richer and more complete annotation while also increasing the coverage of relation annotation and entity linking. Relations between nested named entities may cross entity boundaries to connect to shorter entities nested within longer ones, which makes such relations harder to detect. NEREL is currently the largest Russian dataset annotated with entities and relations: it comprises 29 named entity types and 49 relation types. At the time of writing, the dataset contains 56K named entities and 39K relations annotated in 933 person-oriented news articles. NEREL is annotated with relations at three levels: (1) within nested named entities, (2) within sentences, and (3) with relations crossing sentence boundaries. We provide benchmark evaluations of current state-of-the-art methods for all three tasks. The dataset is freely available at https://github.com/nerel-ds/NEREL.
Citations: 0
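For readers who want to load the corpus, the sketch below parses one annotation file, assuming the release uses BRAT-style standoff format (.txt/.ann pairs with "T" lines for entities and "R" lines for relations). The format and the example path are assumptions; the repository's README is authoritative.

```python
# Minimal sketch: read entities and relations from one BRAT-style .ann file.
# The file path is a placeholder; only T (entity) and R (relation) lines are handled.
from pathlib import Path

def read_brat(ann_path: Path):
    """Return (entities, relations) parsed from a BRAT standoff .ann file."""
    entities, relations = {}, []
    for line in ann_path.read_text(encoding="utf-8").splitlines():
        if line.startswith("T"):      # Txx <TAB> TYPE start end <TAB> surface text
            tid, meta, surface = line.split("\t", 2)
            etype, start, end = meta.split(";")[0].split()  # ignore discontinuous spans for brevity
            entities[tid] = {"type": etype, "start": int(start), "end": int(end), "text": surface}
        elif line.startswith("R"):    # Rxx <TAB> TYPE Arg1:Txx Arg2:Tyy
            _, meta = line.split("\t", 1)
            rtype, arg1, arg2 = meta.split()
            relations.append({"type": rtype, "head": arg1.split(":")[1], "tail": arg2.split(":")[1]})
    return entities, relations

ents, rels = read_brat(Path("NEREL/dev/example.ann"))   # placeholder path
print(len(ents), "entities,", len(rels), "relations")
```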
A survey and study impact of tweet sentiment analysis via transfer learning in low resource scenarios
CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-09-14 | DOI: 10.1007/s10579-023-09687-8
Manoel Veríssimo dos Santos Neto, Nádia Félix F. da Silva, Anderson da Silva Soares
Sentiment analysis (SA) is a research area focused on obtaining contextual polarity from text. Deep learning currently achieves outstanding results in this task. However, much annotated data is necessary to train these algorithms, and obtaining such data is expensive and difficult. In low-resource scenarios, this problem is even more significant because there is little available data. Transfer learning (TL) can be used to mitigate this problem because some architectures can be developed with less data. Language models are one way of applying TL in natural language processing (NLP), and they have achieved competitive results. Nevertheless, some models require many hours of training and substantial computational resources, which in some contexts people and organizations do not have. In this paper, we explore BERT (Pre-training of Deep Bidirectional Transformers for Language Understanding), MultiFiT (Efficient Multilingual Language Model Fine-tuning), ALBERT (A Lite BERT for Self-supervised Learning of Language Representations), and RoBERTa (A Robustly Optimized BERT Pretraining Approach). In all of our experiments, these models obtain better results than CNN (convolutional neural network) and LSTM (Long Short-Term Memory) models. For the MultiFiT and RoBERTa models, we propose a pretrained language model (PTLM) built from Twitter data. Using this approach, we obtained competitive results compared with models trained on formal-language datasets. The main goal is to show the impact of TL and language models, comparing results with other techniques and reporting the computational costs of these approaches.
Citations: 1
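As an illustration of the PTLM idea described above, the sketch below continues masked-language-model pretraining of a generic RoBERTa checkpoint on a plain-text file of tweets before any sentiment fine-tuning. It is a hedged sketch, not the authors' setup: the checkpoint, file name, and hyperparameters are placeholders.

```python
# Minimal sketch: domain-adaptive MLM pretraining on a tweet corpus before
# sentiment fine-tuning. Checkpoint and corpus file are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

checkpoint = "roberta-base"            # swap in a Portuguese checkpoint as needed
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

tweets = load_dataset("text", data_files={"train": "tweets.txt"})["train"]  # placeholder corpus

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tweets = tweets.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(output_dir="tweet-ptlm", num_train_epochs=1,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=tweets, data_collator=collator).train()
```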
An eye-tracking-with-EEG coregistration corpus of narrative sentences
IF 2.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-08-29 | DOI: 10.1007/s10579-023-09684-x
S. Frank, Anna Aumeistere
{"title":"An eye-tracking-with-EEG coregistration corpus of narrative sentences","authors":"S. Frank, Anna Aumeistere","doi":"10.1007/s10579-023-09684-x","DOIUrl":"https://doi.org/10.1007/s10579-023-09684-x","url":null,"abstract":"","PeriodicalId":49927,"journal":{"name":"Language Resources and Evaluation","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46749373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Data augmentation strategies to improve text classification: a use case in smart cities
IF 2.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-08-23 | DOI: 10.1007/s10579-023-09685-w
Luciana Bencke, V. Moreira
{"title":"Data augmentation strategies to improve text classification: a use case in smart cities","authors":"Luciana Bencke, V. Moreira","doi":"10.1007/s10579-023-09685-w","DOIUrl":"https://doi.org/10.1007/s10579-023-09685-w","url":null,"abstract":"","PeriodicalId":49927,"journal":{"name":"Language Resources and Evaluation","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47201217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The development of a labelled te reo Māori–English bilingual database for language technology
CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-08-20 | DOI: 10.1007/s10579-023-09680-1
Jesin James, Isabella Shields, Vithya Yogarajan, Peter J. Keegan, Catherine I. Watson, Peter-Lucas Jones, Keoni Mahelona
Te reo Māori (referred to as Māori), New Zealand's indigenous language, is under-resourced in language technology. Māori speakers are bilingual, and Māori is code-switched with English. Unfortunately, minimal resources are available for Māori language technology, language detection, and code-switch detection for the Māori–English pair. Both English and Māori use Roman-derived orthography, which makes rule-based systems for language and code-switch detection restrictive. Most Māori language detection is done manually by language experts. This research builds a Māori–English bilingual database of 66,016,807 words with word-level language annotation. The New Zealand Parliament Hansard debate reports were used to build the database. Language labels are assigned automatically using language-specific rules and expert manual annotation. Words with the same spelling but different meanings exist in Māori and English; these words could not be categorised as Māori or English using word-level language rules alone, so manual annotation was necessary. An analysis of various aspects of the database, such as metadata, year-wise statistics, frequently occurring words, sentence length, and N-grams, is also reported. The database developed here is a valuable tool for future language and speech technology development for Aotearoa New Zealand. The methodology followed to label the database can also be applied to other low-resourced language pairs.
Citations: 0
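To make the rule-based side of the labelling pipeline concrete, the sketch below tags tokens using simple Māori orthographic constraints (a closed character set, vowel-final words, and "g" occurring only inside "ng"). These rules are a simplification for illustration, not the guideline used to build the database, and they deliberately show why ambiguous forms still need manual annotation.

```python
# Illustrative sketch of word-level language tagging by orthography.
# The character rules are simplified and are not the authors' exact guideline.
import re

MAORI_CHARS = set("aeiouāēīōūhkmnprtwg")   # 'g' only occurs in the digraph 'ng'

def looks_maori(word: str) -> bool:
    w = word.lower()
    if not w or any(c not in MAORI_CHARS for c in w):
        return False
    if w[-1] not in "aeiouāēīōū":          # Māori words end in a vowel
        return False
    # 'g' is only valid as the second letter of the digraph 'ng'.
    return all(i > 0 and w[i - 1] == "n" for i, c in enumerate(w) if c == "g")

def tag_tokens(sentence: str):
    """Tag each token as possibly Māori ('mi-or-ambiguous') or English ('en')."""
    tags = []
    for token in re.findall(r"[^\W\d_]+", sentence):
        if looks_maori(token):
            # Forms like 'to', 'he', 'one' pass the orthographic test in both
            # languages, which is exactly why manual annotation was needed.
            tags.append((token, "mi-or-ambiguous"))
        else:
            tags.append((token, "en"))
    return tags

print(tag_tokens("Kia ora everyone, the hui starts at ten"))
```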
Comparative performance of ensemble machine learning for Arabic cyberbullying and offensive language detection
IF 2.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-08-13 | DOI: 10.1007/s10579-023-09683-y
M. Khairy, Tarek M. Mahmoud, Ahmed Omar, Tarek Abd El-Hafeez
{"title":"Comparative performance of ensemble machine learning for Arabic cyberbullying and offensive language detection","authors":"M. Khairy, Tarek M. Mahmoud, Ahmed Omar, Tarek Abd El-Hafeez","doi":"10.1007/s10579-023-09683-y","DOIUrl":"https://doi.org/10.1007/s10579-023-09683-y","url":null,"abstract":"","PeriodicalId":49927,"journal":{"name":"Language Resources and Evaluation","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44624553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RUN-AS: a novel approach to annotate news reliability for disinformation detection
IF 2.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-08-06 | DOI: 10.1007/s10579-023-09678-9
Alba Bonet-Jover, Robiert Sepúlveda-Torres, E. Saquete, P. Martínez-Barco, Mario Nieto-Pérez
{"title":"RUN-AS: a novel approach to annotate news reliability for disinformation detection","authors":"Alba Bonet-Jover, Robiert Sepúlveda-Torres, E. Saquete, P. Martínez-Barco, Mario Nieto-Pérez","doi":"10.1007/s10579-023-09678-9","DOIUrl":"https://doi.org/10.1007/s10579-023-09678-9","url":null,"abstract":"","PeriodicalId":49927,"journal":{"name":"Language Resources and Evaluation","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44243946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The limitations of irony detection in Dutch social media
IF 2.7 | CAS Zone 3, Computer Science | Q3 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-07-23 | DOI: 10.1007/s10579-023-09656-1
Aaron Maladry, Els Lefever, Cynthia Van Hee, Veronique Hoste
{"title":"The limitations of irony detection in Dutch social media","authors":"Aaron Maladry, Els Lefever, Cynthia Van Hee, Veronique Hoste","doi":"10.1007/s10579-023-09656-1","DOIUrl":"https://doi.org/10.1007/s10579-023-09656-1","url":null,"abstract":"","PeriodicalId":49927,"journal":{"name":"Language Resources and Evaluation","volume":" ","pages":""},"PeriodicalIF":2.7,"publicationDate":"2023-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46825933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2