
Latest Articles in Computational Linguistics

Improved N-Best Extraction with an Evaluation on Language Data
IF 9.3 | CAS Tier 2 (Computer Science) | Q1 (Arts and Humanities) | Pub Date: 2021-12-16 | DOI: 10.1162/coli_a_00427
Johanna Björklund, F. Drewes, Anna Jonsson
We show that a previously proposed algorithm for the N-best trees problem can be made more efficient by changing how it arranges and explores the search space. Given an integer N and a weighted tree automaton (wta) M over the tropical semiring, the algorithm computes N trees of minimal weight with respect to M. Compared with the original algorithm, the modifications increase the laziness of the evaluation strategy, which makes the new algorithm asymptotically more efficient than its predecessor. The algorithm is implemented in the software Betty, and compared to the state-of-the-art algorithm for extracting the N best runs, implemented in the software toolkit Tiburon. The data sets used in the experiments are wtas resulting from real-world natural language processing tasks, as well as artificially created wtas with varying degrees of nondeterminism. We find that Betty outperforms Tiburon on all tested data sets with respect to running time, while Tiburon seems to be the more memory-efficient choice.
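The abstract describes best-first extraction of the N minimal-weight trees accepted by a weighted tree automaton (wta) over the tropical semiring (min, +). As a rough illustration of the search-space exploration involved (not the Betty or Tiburon algorithm itself; the rule encoding and the strictly-positive-weight assumption are our own simplifications), a priority-queue sketch might look like:

```python
import heapq
from itertools import count, product

def n_best_trees(rules, final, n):
    """Enumerate the n lightest trees accepted by a weighted tree
    automaton over the tropical semiring (min, +).

    rules: (state, symbol, child_states, weight) tuples; a tree built
    by a rule costs the rule weight plus the weights of its subtrees.
    With strictly positive weights every pushed item is at least as
    heavy as the item that produced it, so the heap pops derivations
    in order of total weight (a Dijkstra-style argument).
    """
    tick = count()            # tie-breaker so the heap never compares trees
    heap, popped, results, seen = [], {}, [], set()
    for q, sym, kids, w in rules:
        if not kids:          # nullary rules seed the frontier
            heapq.heappush(heap, (w, next(tick), q, sym))
    while heap and len(results) < n:
        w, _, q, tree = heapq.heappop(heap)
        if (q, tree) in seen:
            continue
        seen.add((q, tree))
        popped.setdefault(q, []).append((w, tree))
        if q == final:
            results.append((w, tree))
        # combine the newly settled derivation with all settled ones
        for q2, sym, kids, rw in rules:
            if q not in kids:
                continue
            for combo in product(*(popped.get(k, []) for k in kids)):
                if (w, tree) not in combo:
                    continue   # only build items that use the new piece
                total = rw + sum(cw for cw, _ in combo)
                subtree = (sym,) + tuple(t for _, t in combo)
                heapq.heappush(heap, (total, next(tick), q2, subtree))
    return results
```

For rules `[("q", "a", (), 1), ("q", "b", (), 2), ("q", "g", ("q",), 1)]` and final state `"q"`, the four lightest trees are a, b, g(a), g(b) with weights 1, 2, 2, 3; the laziness the paper improves on concerns how such candidate combinations are deferred rather than built eagerly.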
Citations: 1
Revisiting the Boundary between ASR and NLU in the Age of Conversational Dialog Systems
IF 9.3 | CAS Tier 2 (Computer Science) | Q1 (Arts and Humanities) | Pub Date: 2021-12-10 | DOI: 10.1162/coli_a_00430
Manaal Faruqui, Dilek Z. Hakkani-Tür
As more users across the world are interacting with dialog agents in their daily life, there is a need for better speech understanding that calls for renewed attention to the dynamics between research in automatic speech recognition (ASR) and natural language understanding (NLU). We briefly review these research areas and lay out the current relationship between them. In light of the observations we make in this article, we argue that (1) NLU should be cognizant of the presence of ASR models being used upstream in a dialog system’s pipeline, (2) ASR should be able to learn from errors found in NLU, (3) there is a need for end-to-end data sets that provide semantic annotations on spoken input, (4) there should be stronger collaboration between ASR and NLU research communities.
Citations: 11
To Augment or Not to Augment? A Comparative Study on Text Augmentation Techniques for Low-Resource NLP
IF 9.3 | CAS Tier 2 (Computer Science) | Q1 (Arts and Humanities) | Pub Date: 2021-11-18 | DOI: 10.1162/coli_a_00425
Gözde Gül Şahin
Abstract Data-hungry deep neural networks have established themselves as the de facto standard for many NLP tasks, including the traditional sequence tagging ones. Despite their state-of-the-art performance on high-resource languages, they still fall behind their statistical counterparts in low-resource scenarios. One methodology to counterattack this problem is text augmentation, that is, generating new synthetic training data points from existing data. Although NLP has recently witnessed several new textual augmentation techniques, the field still lacks a systematic performance analysis on a diverse set of languages and sequence tagging tasks. To fill this gap, we investigate three categories of text augmentation methodologies that perform changes on the syntax (e.g., cropping sub-sentences), token (e.g., random word insertion), and character (e.g., character swapping) levels. We systematically compare the methods on part-of-speech tagging, dependency parsing, and semantic role labeling for a diverse set of language families using various models, including the architectures that rely on pretrained multilingual contextualized language models such as mBERT. Augmentation most significantly improves dependency parsing, followed by part-of-speech tagging and semantic role labeling. We find the experimented techniques to be effective on morphologically rich languages in general rather than analytic languages such as Vietnamese. Our results suggest that the augmentation techniques can further improve over strong baselines based on mBERT, especially for dependency parsing. We identify the character-level methods as the most consistent performers, while synonym replacement and syntactic augmenters provide inconsistent improvements. Finally, we discuss that the results most heavily depend on the task, language pair (e.g., syntactic-level techniques mostly benefit higher-level tasks and morphologically richer languages), and model type (e.g., token-level augmentation provides significant improvements for BPE, while character-level ones give generally higher scores for char and mBERT based models).
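The token- and character-level operations the abstract names can be caricatured in a few lines. The function names, rates, and seeding below are our own illustration, not the paper's implementation:

```python
import random

def char_swap(tokens, rate=0.3, seed=0):
    """Character-level augmentation: swap two adjacent inner characters
    of a token with probability `rate` (cf. "character swapping")."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if len(tok) > 3 and rng.random() < rate:
            i = rng.randrange(1, len(tok) - 2)
            tok = tok[:i] + tok[i + 1] + tok[i] + tok[i + 2:]
        out.append(tok)
    return out

def word_insert(tokens, rate=0.3, seed=0):
    """Token-level augmentation: before each position, with probability
    `rate` insert a random word drawn from the sentence itself
    (cf. "random word insertion")."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if rng.random() < rate:
            out.append(rng.choice(tokens))
        out.append(tok)
    return out

sent = "the augmented sentence keeps its label".split()
noisy = char_swap(sent)      # inner characters of some tokens swapped
longer = word_insert(sent)   # some words of the sentence re-inserted
```

Both operations preserve the sentence's gold labels cheaply, which is why, per the abstract's findings, character-level noise of this kind tends to help char- and mBERT-based taggers most.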
Citations: 17
Natural Language Processing and Computational Linguistics
IF 9.3 | CAS Tier 2 (Computer Science) | Q1 (Arts and Humanities) | Pub Date: 2021-10-18 | DOI: 10.1162/coli_a_00420
Jun'ichi Tsujii
away other aspects of information, such as the speaker's empathy, distinction of old/new information, emphasis, and so on. To climb up the hierarchy led to loss of information in lower levels of representation. In Tsujii (1986), instead of mapping at the abstract level, I proposed "transfer based on a bundle of features of all the levels", in which the transfer would refer to all levels of representation in the source language to produce a corresponding representation in the target language (Figure 4). Because different levels of representation require different geometrical structures (i.e., different tree structures), the realization of this proposal had to wait for development of a clear mathematical formulation of feature-based representation with reentrancy, which allowed multiple levels (i.e., multiple trees) to be represented with their mutual relationships (see the next section). [Footnote 6: IS (Interface Structure) is dependent on a specific language. In particular, unlike the interlingual approach, Eurotra did not assume language-independent leximemes in ISs, so that the transfer phase between the two ISs (source and target ISs) was indispensable. See footnote 5.] [Figure 4: Description-based transfer (Tsujii 1986).]

Another idea we adopted to systematize the transfer phase was recursive transfer (Nagao and Tsujii 1986), which was inspired by the idea of compositional semantics in CL. According to the views of linguists at the time, a language is an infinite set of expressions which, in turn, is defined by a finite set of rules. By applying this finite number of rules, one can generate infinitely many grammatical sentences of the language. Compositional semantics claimed that the meaning of a phrase was determined by combining the meanings of its subphrases, using the rules that generated the phrase. Compositional translation applied the same idea to translation. That is, the translation of a phrase was determined by combining the translations of its subphrases. In this way, translations of infinitely many sentences of the source language could be generated. Using the compositional translation approach, the translation of a sentence would be undertaken by recursively tracing a tree structure of a source sentence. The translation of a phrase would then be formulated by combining the translations of its subphrases. That is, translation would be constructed in a bottom-up manner, from smaller units of translation to larger units. Furthermore, because the mapping of a phrase from the source to the target would be determined by the lexical head of the phrase, the lexical entry for the head word specified how to map a phrase to the target. In the MU project, we called this lexicon-driven, recursive transfer (Nagao and Tsujii 1986) (Figure 5).
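The lexicon-driven, recursive transfer described above can be sketched in a few lines of code. The tree encoding, the lexicon format, and the toy French entries below are our own hypothetical illustration, not the MU project's actual machinery:

```python
# A source phrase is (head_word, [subphrases]); leaves have no subphrases.
# Each lexicon entry, keyed by the head word of a phrase, names the target
# head and the order in which to emit the translated subphrases -- so the
# head's lexical entry controls how its whole phrase maps to the target.
TRANSFER_LEXICON = {
    "read": ("lire", [1, 0]),   # hypothetical entry: swap the two arguments
    "John": ("Jean", []),
    "book": ("livre", []),
}

def transfer(node, lexicon):
    """Bottom-up compositional transfer: translate the subphrases first,
    then combine their translations as the head word's entry dictates."""
    head, kids = node
    tgt_head, order = lexicon.get(head, (head, list(range(len(kids)))))
    return (tgt_head, [transfer(kids[i], lexicon) for i in order])

src = ("read", [("John", []), ("book", [])])
tgt = transfer(src, TRANSFER_LEXICON)
# tgt == ("lire", [("livre", []), ("Jean", [])])
```

The recursion makes the compositional claim concrete: the translation of each phrase is built only from the translations of its subphrases plus the head word's entry, which is also exactly where the disambiguation problem discussed next bites, since the entry cannot see the surrounding context.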
Citations: 2
Natural Language Processing: A Machine Learning Perspective by Yue Zhang and Zhiyang Teng
IF 9.3 | CAS Tier 2 (Computer Science) | Q1 (Arts and Humanities) | Pub Date: 2021-10-04 | DOI: 10.1162/coli_r_00423
Julia Ive
Citations: 0
Ethics Sheet for Automatic Emotion Recognition and Sentiment Analysis
IF 9.3 | CAS Tier 2 (Computer Science) | Q1 (Arts and Humanities) | Pub Date: 2021-09-17 | DOI: 10.1162/coli_a_00433
Saif M. Mohammad
Abstract The importance and pervasiveness of emotions in our lives makes affective computing a tremendously important and vibrant line of work. Systems for automatic emotion recognition (AER) and sentiment analysis can be facilitators of enormous progress (e.g., in improving public health and commerce) but also enablers of great harm (e.g., for suppressing dissidents and manipulating voters). Thus, it is imperative that the affective computing community actively engage with the ethical ramifications of their creations. In this article, I have synthesized and organized information from AI Ethics and Emotion Recognition literature to present fifty ethical considerations relevant to AER. Notably, this ethics sheet fleshes out assumptions hidden in how AER is commonly framed, and in the choices often made regarding the data, method, and evaluation. Special attention is paid to the implications of AER on privacy and social groups. Along the way, key recommendations are made for responsible AER. The objective of the ethics sheet is to facilitate and encourage more thoughtfulness on why to automate, how to automate, and how to judge success well before the building of AER systems. Additionally, the ethics sheet acts as a useful introductory document on emotion recognition (complementing survey articles).
Citations: 34
Survey of Low-Resource Machine Translation
IF 9.3 | CAS Tier 2 (Computer Science) | Q1 (Arts and Humanities) | Pub Date: 2021-09-01 | DOI: 10.1162/coli_a_00446
B. Haddow, Rachel Bawden, Antonio Valerio Miceli Barone, Jindřich Helcl, Alexandra Birch
Abstract We present a survey covering the state of the art in low-resource machine translation (MT) research. There are currently around 7,000 languages spoken in the world and almost all language pairs lack significant resources for training machine translation models. There has been increasing interest in research addressing the challenge of producing useful translation models when very little translated training data is available. We present a summary of this topical research field and provide a description of the techniques evaluated by researchers in several recent shared tasks in low-resource MT.
Citations: 70
The (Un)Suitability of Automatic Evaluation Metrics for Text Simplification
IF 9.3 | CAS Tier 2 (Computer Science) | Q1 (Arts and Humanities) | Pub Date: 2021-08-11 | DOI: 10.1162/coli_a_00418
Fernando Alva-Manchego, Carolina Scarton, Lucia Specia
Abstract In order to simplify sentences, several rewriting operations can be performed, such as replacing complex words with simpler synonyms, deleting unnecessary information, and splitting long sentences. Despite this multi-operation nature, evaluation of automatic simplification systems relies on metrics that moderately correlate with human judgments on the simplicity achieved by executing specific operations (e.g., simplicity gain based on lexical replacements). In this article, we investigate how well existing metrics can assess sentence-level simplifications where multiple operations may have been applied and which, therefore, require more general simplicity judgments. For that, we first collect a new and more reliable data set for evaluating the correlation of metrics and human judgments of overall simplicity. Second, we conduct the first meta-evaluation of automatic metrics in Text Simplification, using our new data set (and other existing data) to analyze the variation of the correlation between metrics' scores and human judgments across three dimensions: the perceived simplicity level, the system type, and the set of references used for computation. We show that these three aspects affect the correlations and, in particular, highlight the limitations of commonly used operation-specific metrics. Finally, based on our findings, we propose a set of recommendations for automatic evaluation of multi-operation simplifications, suggesting which metrics to compute and how to interpret their scores.
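The meta-evaluation the abstract describes hinges on correlating metric scores with human judgments. A minimal version of that computation, with Pearson's r written out by hand and entirely hypothetical toy numbers, looks like:

```python
def pearson(xs, ys):
    """Pearson correlation between metric scores and human ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical numbers: one automatic metric scored against human
# simplicity judgments for five system outputs.
metric_scores = [0.71, 0.43, 0.88, 0.52, 0.60]
human_ratings = [3.8, 2.1, 4.5, 2.9, 3.2]
r = pearson(metric_scores, human_ratings)  # near 1 => metric tracks humans
```

The paper's point is that a single such r is misleading: it should be broken down by perceived simplicity level, system type, and reference set, since each dimension shifts the correlation.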
引用次数: 46
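The meta-evaluation described in this abstract amounts to computing correlations between automatic metric scores and human simplicity ratings. A minimal sketch of that computation is below; the scores are invented for illustration, and Pearson's r is used as one plausible correlation statistic (the paper itself also considers other settings):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: one automatic metric score and one averaged
# human simplicity rating per simplified sentence.
metric_scores = [0.71, 0.45, 0.88, 0.30, 0.62]
human_ratings = [3.8, 2.9, 4.5, 2.1, 3.5]

print(f"Pearson r = {pearson_r(metric_scores, human_ratings):.3f}")
```

In a real meta-evaluation this correlation would be recomputed per stratum (simplicity level, system type, reference set) to see how it varies, which is exactly the three-dimensional analysis the abstract describes.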
LFG Generation from Acyclic F-Structures is NP-Hard
IF 9.3 | Computer Science (Zone 2) | Q1 Arts and Humanities | Pub Date: 2021-08-11 | DOI: 10.1162/coli_a_00419
Jürgen Wedekind, R. Kaplan
Abstract: The universal generation problem for LFG grammars is that of determining whether a given grammar derives any terminal string with a given f-structure. This problem is known to be decidable for acyclic f-structures. In this brief note, we show that for those f-structures the problem is nonetheless intractable. This holds even for grammars that are off-line parsable.
Citations: 1
Are Ellipses Important for Machine Translation?
IF 9.3 | Computer Science (Zone 2) | Q1 Arts and Humanities | Pub Date: 2021-08-05 | DOI: 10.1162/coli_a_00414
Payal Khullar
Abstract: This article describes an experiment to evaluate the impact of different types of ellipses discussed in theoretical linguistics on Neural Machine Translation (NMT), using English to Hindi/Telugu as source and target languages. Manual evaluation shows that most of the errors made by Google NMT are located in the clause containing the ellipsis, that the frequency of such errors is slightly higher in Telugu than in Hindi, and that translation adequacy improves when ellipses are reconstructed with their antecedents. These findings not only confirm the importance of ellipses and their resolution for MT, but also hint at a possible correlation between the translation of discourse devices like ellipses and the morphological incongruity of source and target. We also observe that not all ellipses are translated poorly and benefit from reconstruction, advocating for a disparate treatment of different ellipses in MT research.
Citations: 0
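The reconstruction step this abstract credits with improving adequacy, restoring an elided phrase from its antecedent before translation, can be illustrated with a toy rule for English VP-ellipsis. The pattern below is purely hypothetical and is not the authors' method; real reconstruction requires parsing and antecedent resolution:

```python
import re

def reconstruct_vp_ellipsis(sentence: str, antecedent_vp: str) -> str:
    """Toy rule: expand a stranded 'does too' into the antecedent VP.

    E.g. 'Mary does too' -> 'Mary likes tea too', given the antecedent
    VP 'likes tea'. Illustrates the preprocessing idea only.
    """
    return re.sub(r"\bdoes too\b", antecedent_vp + " too", sentence)

src = "John likes tea, and Mary does too."
print(reconstruct_vp_ellipsis(src, "likes tea"))
# -> John likes tea, and Mary likes tea too.
```

Feeding the expanded sentence, rather than the elliptical one, to an MT system is the kind of intervention whose effect on adequacy the experiment measures.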