
Latest publications in Natural Language Engineering

Data-to-text generation using conditional generative adversarial with enhanced transformer
IF 2.5 | Tier 3, Computer Science | Q1 Arts and Humanities | Pub Date: 2023-11-28 | DOI: 10.1017/s1351324923000487
Elham Seifossadat, Hossein Sameti
In this paper, we propose an enhanced version of the vanilla transformer for data-to-text generation and then use it as the generator of a conditional generative adversarial model to improve the semantic quality and diversity of output sentences. Specifically, by adding a diagonal mask matrix to the attention scores of the encoder and using the history of the attention weights in the decoder, this enhanced version of the vanilla transformer prevents semantic defects in the output text. Also, using this enhanced transformer and a triplet network as, respectively, the generator and discriminator of a conditional generative adversarial network, the diversity and semantic quality of the sentences are guaranteed. To prove the effectiveness of the proposed model, called conditional generative adversarial with enhanced transformer (CGA-ET), we performed experiments on three different datasets and observed that our proposed model is able to achieve better results than the baseline models in terms of the BLEU, METEOR, NIST, ROUGE-L, CIDEr, BERTScore, and SER automatic evaluation metrics as well as human evaluation.
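As a concrete picture of the diagonal-masking idea, the sketch below adds a diagonal mask to self-attention scores so that each position cannot attend to itself; this is one plausible reading of the abstract, not the authors' released code, and all tensor shapes are assumed.

```python
import torch
import torch.nn.functional as F

def masked_self_attention(q, k, v):
    """Scaled dot-product attention with a diagonal mask, so that each
    position cannot attend to itself (an illustrative reading of the
    'diagonal mask matrix' mentioned in the abstract)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5          # (batch, len, len)
    seq_len = scores.size(-1)
    diag = torch.eye(seq_len, dtype=torch.bool, device=scores.device)
    scores = scores.masked_fill(diag, float("-inf"))       # block the diagonal
    weights = F.softmax(scores, dim=-1)
    return weights @ v, weights

# toy usage: batch of 2 sequences, length 5, dimension 64
q = k = v = torch.randn(2, 5, 64)
out, attn = masked_self_attention(q, k, v)
print(out.shape, attn.shape)   # torch.Size([2, 5, 64]) torch.Size([2, 5, 5])
```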
Citations: 0
Abstractive summarization with deep reinforcement learning using semantic similarity rewards
Tier 3, Computer Science | Q1 Arts and Humanities | Pub Date: 2023-10-31 | DOI: 10.1017/s1351324923000505
Figen Beken Fikri, Kemal Oflazer, Berrin Yanıkoğlu
Abstractive summarization is an approach to document summarization that is not limited to selecting sentences from the document but can generate new sentences as well. We address the two main challenges in abstractive summarization: how to evaluate the performance of a summarization model and what constitutes a good training objective. We first introduce new evaluation measures based on the semantic similarity of the input document and the corresponding summary. The similarity scores are obtained by a fine-tuned BERTurk model using either a cross-encoder or a bi-encoder architecture. The fine-tuning is done on the Turkish Natural Language Inference and Semantic Textual Similarity benchmark datasets. We show that these measures have better correlations with human evaluations than Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores and BERTScore. We then introduce a deep reinforcement learning algorithm that uses the proposed semantic similarity measures as rewards, together with a mixed training objective, in order to generate summaries that are more natural in terms of human readability. We show that training with the mixed training objective function improves similarity scores compared to training with the maximum-likelihood objective alone.
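A minimal sketch of the kind of mixed objective described here (a self-critical policy-gradient term rewarded by semantic similarity, interpolated with the maximum-likelihood loss) is given below; the weighting, the reward values, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def mixed_loss(log_probs_sampled, reward_sampled, reward_baseline, nll_loss, gamma=0.9):
    """Illustrative mixed objective: a reinforcement-learning term rewarded by
    semantic similarity, interpolated with the usual maximum-likelihood (NLL)
    term. The interpolation weight and reward are assumptions, not the
    authors' published settings."""
    # self-critical policy gradient: push up sampled summaries whose reward
    # beats the greedy-decoded baseline
    advantage = reward_sampled - reward_baseline
    rl_loss = -(advantage * log_probs_sampled).mean()
    return gamma * rl_loss + (1.0 - gamma) * nll_loss

# toy usage with made-up numbers
log_probs = torch.tensor([-12.3, -9.8])   # summed log-probs of two sampled summaries
r_sample  = torch.tensor([0.71, 0.64])    # e.g. cosine similarity from a fine-tuned BERT
r_greedy  = torch.tensor([0.68, 0.66])    # reward of the greedy (baseline) decode
nll       = torch.tensor(2.4)             # teacher-forced NLL on the reference
print(mixed_loss(log_probs, r_sample, r_greedy, nll))
```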
Citations: 0
Neural Arabic singular-to-plural conversion using a pretrained Character-BERT and a fused transformer
Tier 3, Computer Science | Q1 Arts and Humanities | Pub Date: 2023-10-11 | DOI: 10.1017/s1351324923000475
Azzam Radman, Mohammed Atros, Rehab Duwairi
Morphological re-inflection generation is one of the most challenging tasks in the natural language processing (NLP) domain, especially for morphologically rich, low-resource languages like Arabic. In this research, we investigate the ability of transformer-based models in the singular-to-plural Arabic noun conversion task. In one of the proposed settings, we start by pretraining a Character-BERT model on a masked language modeling task using 1,134,950 Arabic words and then adopt a fusion technique to transfer the knowledge gained by the pretrained model to a full encoder–decoder transformer model. The second proposed setting directly fuses the output Character-BERT embeddings into the decoder. We then analyze and compare the performance of the two architectures and provide an interpretability section in which we track the attention features of the model. We perform the interpretation on both the macro and micro levels, providing some individual examples. Moreover, we provide a thorough error analysis showing the strengths and weaknesses of the proposed framework. To the best of our knowledge, this is the first effort in the Arabic NLP domain to develop an end-to-end fused-transformer deep learning model that addresses the problem of singular-to-plural conversion.
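The second setting, fusing pretrained character-level embeddings into the decoder, can be pictured with the hypothetical PyTorch sketch below; the dimensions, the cross-attention fusion point, and the class name are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FusedDecoderLayer(nn.Module):
    """Illustrative fusion of frozen, pretrained character-level embeddings
    into a transformer decoder layer via cross-attention."""
    def __init__(self, d_model=256, d_pretrained=768, nhead=4):
        super().__init__()
        self.project = nn.Linear(d_pretrained, d_model)   # map Character-BERT dim to decoder dim
        self.layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)

    def forward(self, tgt, char_bert_embeddings):
        memory = self.project(char_bert_embeddings)       # (batch, src_len, d_model)
        return self.layer(tgt, memory)                    # decoder attends over fused embeddings

# toy usage: batch of 2 singular nouns, 10 source characters, 12 target positions
char_embs = torch.randn(2, 10, 768)   # stand-in for Character-BERT output
tgt = torch.randn(2, 12, 256)
print(FusedDecoderLayer()(tgt, char_embs).shape)   # torch.Size([2, 12, 256])
```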
Citations: 0
Perceptional and actional enrichment for metaphor detection with sensorimotor norms
Tier 3, Computer Science | Q1 Arts and Humanities | Pub Date: 2023-09-20 | DOI: 10.1017/s135132492300044x
Mingyu Wan, Qi Su, Kathleen Ahrens, Chu-Ren Huang
Understanding the nature of meaning and its extensions (with metaphor as one typical kind) has been one core issue in figurative language study since Aristotle’s time. This research takes a computational cognitive perspective to model metaphor based on the assumption that meaning is perceptual, embodied, and encyclopedic. We model word meaning representation for metaphor detection with embodiment information obtained from behavioral experiments. Our work is the first attempt to incorporate sensorimotor knowledge into neural networks for metaphor detection, and demonstrates superiority, consistency, and interpretability compared to peer systems based on two general datasets. In addition, with cross-sectional analysis of different feature schemas, our results suggest that metaphor, as a device of cognitive conceptualization, can be ‘learned’ from the perceptual and actional information independent of several more explicit levels of linguistic representation. The access to such knowledge allows us to probe further into word meaning mapping tendencies relevant to our conceptualization and reaction to the physical world.
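One simple way to realize the enrichment described here is to concatenate a word's sensorimotor norm ratings with its contextual embedding before classification; the sketch below is an illustrative assumption about such a setup, not the authors' model, and the feature sizes are placeholders.

```python
import torch
import torch.nn as nn

class SensorimotorMetaphorClassifier(nn.Module):
    """Minimal sketch: enrich a contextual word representation with
    sensorimotor norm ratings (here, 11 perceptual/action dimensions)
    before a binary metaphorical-vs-literal decision."""
    def __init__(self, d_word=768, d_norms=11, d_hidden=128):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(d_word + d_norms, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, 2),   # metaphorical vs. literal
        )

    def forward(self, word_vec, norm_vec):
        return self.classifier(torch.cat([word_vec, norm_vec], dim=-1))

# toy usage: one target word in context
word_vec = torch.randn(1, 768)   # contextual embedding of the target word
norm_vec = torch.rand(1, 11)     # its sensorimotor ratings, scaled to [0, 1]
print(SensorimotorMetaphorClassifier()(word_vec, norm_vec))
```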
Citations: 0
Improved conversational recommender system based on dialog context
IF 2.5 | Tier 3, Computer Science | Q1 Arts and Humanities | Pub Date: 2023-09-08 | DOI: 10.1017/s1351324923000451
Xiaoyi Wang, Jie Liu, Jianyong Duan
A conversational recommender system (CRS) needs to seamlessly integrate its two modules, recommendation and dialog, aiming to recommend high-quality items to users through multiple rounds of interactive dialogs. Items can typically refer to goods, movies, news, etc. Through this form of interactive dialog, users can express their preferences in real time, and the system can fully understand the user’s thoughts and recommend corresponding items. Although mainstream dialog recommendation systems have improved performance to some extent, some key issues remain, such as insufficient consideration of the order of entities in the dialog, the differing contributions of items in the dialog history, and the low diversity of generated responses. To address these shortcomings, we propose an improved dialog context model based on time-series features. Firstly, we augment the semantic representation of words and items using two external knowledge graphs and align the semantic spaces using mutual information maximization techniques. Secondly, we add a retrieval model to the dialog recommendation system to provide auxiliary information for generating replies. We then utilize a deep timing network to serialize the dialog content and more accurately learn the feature relationships between users and items for recommendation. In this paper, the dialog recommendation system is divided into two components, and different evaluation indicators are used to evaluate the performance of the dialog component and the recommendation component. Experimental results on widely used benchmarks show that the proposed method is effective.
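The "deep timing network" over the dialog history can be pictured as a recurrent encoder over the sequence of mentioned entities, whose final state is scored against candidate items; the sketch below is an assumed simplification and omits the knowledge-graph and retrieval components described in the abstract.

```python
import torch
import torch.nn as nn

class DialogHistoryEncoder(nn.Module):
    """Illustrative sequence model: a GRU reads the entities mentioned across
    dialog turns in order, and the final state is scored against candidate
    item embeddings to produce recommendation scores."""
    def __init__(self, d_entity=128):
        super().__init__()
        self.gru = nn.GRU(d_entity, d_entity, batch_first=True)

    def forward(self, entity_seq, candidate_items):
        _, h = self.gru(entity_seq)            # h: (1, batch, d_entity)
        user_pref = h.squeeze(0)               # (batch, d_entity)
        return user_pref @ candidate_items.T   # (batch, num_items) scores

# toy usage: 7 entities mentioned so far, 100 candidate items
history = torch.randn(1, 7, 128)
items = torch.randn(100, 128)
print(DialogHistoryEncoder()(history, items).shape)   # torch.Size([1, 100])
```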
Citations: 0
Emerging trends: Smooth-talking machines
Tier 3, Computer Science | Q1 Arts and Humanities | Pub Date: 2023-09-01 | DOI: 10.1017/s1351324923000463
Kenneth Ward Church, Richard Yue
Large language models (LLMs) have achieved amazing successes. They have done well on standardized tests in medicine and the law. That said, the bar has been raised so high that it could take decades to make good on expectations. To buy time for this long-term research program, the field needs to identify some good short-term applications for smooth-talking machines that are more fluent than trustworthy.
Citations: 0
A study towards contextual understanding of toxicity in online conversations
IF 2.5 | Tier 3, Computer Science | Q1 Arts and Humanities | Pub Date: 2023-08-30 | DOI: 10.1017/s1351324923000414
P. Madhyastha, Antigoni-Maria Founta, Lucia Specia
Identifying and annotating toxic online content on social media platforms is an extremely challenging problem. Work that studies toxicity in online content has predominantly focused on comments as independent entities. However, comments on social media are inherently conversational, and therefore understanding and judging them fundamentally requires access to the context in which they are made. We introduce a study and resulting annotated dataset in which we devise a number of controlled experiments on the effect of context and other observable confounders – namely gender, age and political orientation – on the perception of toxicity in online content. Our analysis clearly shows the significance of context and the effect of observable confounders on annotations. Namely, we observe that the ratio of toxic to non-toxic judgements can be very different for each control group, and a higher proportion of samples are judged toxic in the presence of contextual information.
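In practice, judging a comment with and without its conversational context amounts to encoding it alone versus as a (parent, comment) pair; the sketch below shows that input format using an untrained bert-base-uncased classification head purely as a placeholder, not a toxicity model from the study.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder classifier head on top of bert-base-uncased; it is untrained for
# toxicity and only illustrates the context-free vs. in-context input format.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

parent = "The referee made some questionable calls yesterday."
comment = "They should never be allowed near a pitch again."

with torch.no_grad():
    no_context = model(**tokenizer(comment, return_tensors="pt")).logits
    with_context = model(**tokenizer(parent, comment, return_tensors="pt")).logits  # pair encoding

print(no_context.softmax(-1), with_context.softmax(-1))
```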
Citations: 0
PGST: A Persian gender style transfer method
IF 2.5 | Tier 3, Computer Science | Q1 Arts and Humanities | Pub Date: 2023-08-15 | DOI: 10.1017/s1351324923000426
Reza Khanmohammadi, S. Mirroshandel
Recent developments in text style transfer have brought this field more attention than ever. There are many challenges associated with transferring the style of input text, such as fluency and content preservation, that need to be addressed. In this research, we present PGST, a novel Persian text style transfer approach in the gender domain, composed of different constituent elements. Built on the significance of part-of-speech tags, our method is the first that successfully transfers the gendered linguistic style of Persian text. We proceed with a pre-trained word embedding for token replacement, a character-based token classifier for gender exchange, and a beam search algorithm for extracting the most fluent combination. Since different approaches are introduced in our research, we determine a trade-off value for evaluating how successfully different models fool our gender identification model with transferred text. Our research focuses primarily on Persian, but since there is no Persian baseline available, we applied our method to a highly studied gender-tagged English corpus and compared it to state-of-the-art English variants to demonstrate its applicability. Our final approach successfully defeated the English and Persian gender identification models by 45.6% and 39.2%, respectively.
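The beam-search step, picking the most fluent combination of per-token replacement candidates, can be sketched generically as below; the candidate lists and the scoring function are toy stand-ins for the embedding-based replacements and fluency model described in the abstract, not the released PGST code.

```python
def beam_search(candidate_lists, score_fn, beam_width=3):
    """Generic beam search over per-token replacement candidates, keeping the
    highest-scoring partial sentences at each step."""
    beams = [([], 0.0)]
    for candidates in candidate_lists:
        expanded = [
            (seq + [tok], score + score_fn(seq, tok))
            for seq, score in beams
            for tok in candidates
        ]
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams

# toy usage: each position offers the original token plus a swapped variant;
# the hypothetical score function prefers shorter tokens as a stand-in for fluency
candidates = [["he", "she"], ["went", "goes"], ["home", "homeward"]]
best = beam_search(candidates, score_fn=lambda seq, tok: -len(tok))
print(best[0])
```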
Citations: 0
Creating a large-scale diachronic corpus resource: Automated parsing in the Greek papyri (and beyond)
IF 2.5 | Tier 3, Computer Science | Q1 Arts and Humanities | Pub Date: 2023-08-15 | DOI: 10.1017/s1351324923000384
Alek Keersmaekers, Toon van Hal
This paper explores how to syntactically parse Ancient Greek texts automatically and maps out ways of fruitfully employing the results of such an automated analysis. Special attention is given to documentary papyrus texts, a large diachronic corpus of non-literary Greek, which presents a unique set of challenges. By making use of the Stanford Graph-Based Neural Dependency Parser, we show that through careful curation of the parsing data and several manipulation strategies, it is possible to achieve a Labeled Attachment Score of about 0.85 for this corpus. We also explain how the data can be converted back to its original (Ancient Greek Dependency Treebanks) format. We describe the results of several tests we have carried out to improve parsing results, with special attention paid to the impact of the annotation format on parser performance. In addition, we offer a detailed qualitative analysis of the remaining errors, including possible ways to solve them. Moreover, the paper gives an overview of the valorisation possibilities of an automatically annotated corpus of Ancient Greek texts in the fields of linguistics, language education, and humanities studies in general. The concluding section critically analyses the remaining difficulties and outlines avenues to further improve the parsing quality and the ensuing practical applications.
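For reference, the Labeled Attachment Score reported here counts the tokens whose predicted head and dependency label both match the gold annotation; a small sketch of that computation, with a made-up four-token example, follows.

```python
def labeled_attachment_score(gold, predicted):
    """Labeled Attachment Score: the share of tokens whose predicted head *and*
    dependency label both match the gold annotation. Each analysis is a list
    of (head_index, label) pairs, one per token."""
    assert len(gold) == len(predicted)
    correct = sum(1 for g, p in zip(gold, predicted) if g == p)
    return correct / len(gold)

# toy usage on a four-token sentence (one wrong label out of four tokens)
gold = [(2, "nsubj"), (0, "root"), (2, "obj"), (3, "nmod")]
pred = [(2, "nsubj"), (0, "root"), (2, "iobj"), (3, "nmod")]
print(labeled_attachment_score(gold, pred))   # 0.75
```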
Citations: 0
Toward a shallow discourse parser for Turkish
IF 2.5 | Tier 3, Computer Science | Q1 Arts and Humanities | Pub Date: 2023-08-11 | DOI: 10.1017/s1351324923000359
Ferhat Kutlu, Deniz Zeyrek, Murathan Kurfali
One of the most interesting aspects of natural language is how texts cohere, which involves the pragmatic or semantic relations that hold between clauses (addition, cause-effect, conditional, similarity), referred to as discourse relations. The identification and classification of discourse relations is an imperative challenge to resolve in order to support tasks such as text summarization, dialogue systems, and machine translation that need information above the clause level. Despite the recent interest in discourse relations in well-known languages such as English, data and experiments are still needed for typologically different and less-resourced languages. We report the most comprehensive investigation of shallow discourse parsing in Turkish, focusing on two main sub-tasks: identification of discourse relation realization types and sense classification of explicit and implicit relations. The work is based on fine-tuning a pre-trained language model (BERT) as an encoder and classifying the encoded data with neural network-based classifiers. We first identify the discourse relation realization type that holds in a given text, if there is any. Then, we move on to the sense classification of the identified explicit and implicit relations. In addition to in-domain experiments on a held-out test set from the Turkish Discourse Bank (TDB 1.2), we also report the out-of-domain performance of our models, using the Turkish part of the TED Multilingual Discourse Bank, in order to evaluate their generalization abilities. Finally, we explore the effect of multilingual data aggregation on the classification of relation realization type through a cross-lingual experiment. The results suggest that our models perform relatively well despite the limited size of the TDB 1.2 and that there are language-specific aspects of detecting the types of discourse relation realization. We believe that the findings are important both in providing insights into the performance of modern language models in a typologically different language and in the low-resource scenario, given that the TDB 1.2 is 1/20th of the Penn Discourse TreeBank in terms of the total number of relations.
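The fine-tuning setup described here, a pretrained BERT encoder over the two discourse arguments with a small classification head, might look like the sketch below; the multilingual checkpoint, the number of sense labels, and the example arguments are placeholders, not the paper's configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModel

class SenseClassifier(torch.nn.Module):
    """Sketch of a BERT encoder over two discourse arguments (as a sentence
    pair) with a linear head over the [CLS] representation."""
    def __init__(self, model_name="bert-base-multilingual-cased", num_senses=8):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = torch.nn.Linear(self.encoder.config.hidden_size, num_senses)

    def forward(self, arg1, arg2, tokenizer):
        batch = tokenizer(arg1, arg2, return_tensors="pt", padding=True, truncation=True)
        cls = self.encoder(**batch).last_hidden_state[:, 0]   # [CLS] representation
        return self.head(cls)

# toy usage on one Turkish argument pair
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = SenseClassifier()
logits = model(["Hava çok soğuktu"], ["bu yüzden içeride kaldık"], tokenizer)
print(logits.shape)   # torch.Size([1, 8])
```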
Citations: 0