
Latest Publications in Computational Linguistics

Dotless Arabic text for Natural Language Processing
IF 9.3 | Q2 Computer Science | Pub Date: 2024-09-12 | DOI: 10.1162/coli_a_00535
Maged S. Al-Shaibani, Irfan Ahmad
This paper introduces a novel representation of Arabic text as an alternative approach for Arabic NLP, inspired by the dotless script of ancient Arabic. We explored this representation through extensive analysis on various text corpora, differing in size and domain, and tokenized using multiple tokenization techniques. Furthermore, we examined the information density of this representation and compared it with the standard dotted Arabic text using text entropy analysis. Utilizing parallel corpora, we also drew comparisons between Arabic and English text analysis to gain additional insights. Our investigation extended to various upstream and downstream NLP tasks, including language modeling, text classification, sequence labeling, and machine translation, examining the implications of both representations. Specifically, we performed seven different downstream tasks using various tokenization schemes, comparing the standard dotted text with the dotless Arabic text representation. Performance with the two representations was comparable across the different tokenizations. However, the dotless representation achieves these results with a significant reduction in vocabulary size, of up to 50% in some scenarios. Additionally, we present a system that restores dots to dotless Arabic text. This system is useful for tasks that require Arabic text as output.
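As a rough illustration of the entropy comparison described above, the sketch below maps a handful of dotted Arabic letters to their dotless skeleton (rasm) and computes character-level entropy. The mapping is deliberately partial and the function names are invented for this example; the paper's representation covers the full alphabet.

```python
import math
from collections import Counter

# Hypothetical, partial mapping from dotted Arabic letters to a shared
# dotless skeleton; several dotted letters collapse onto one form.
DOTLESS_MAP = {
    "ب": "ٮ", "ت": "ٮ", "ث": "ٮ", "ن": "ٮ",  # four letters, one skeleton
    "ج": "ح", "خ": "ح",
    "ز": "ر",
    "ش": "س",
}

def to_dotless(text: str) -> str:
    # Replace each dotted letter with its skeleton; pass others through.
    return "".join(DOTLESS_MAP.get(ch, ch) for ch in text)

def char_entropy(text: str) -> float:
    # Shannon entropy (bits per character) of the character distribution.
    counts = Counter(text)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Because dotless writing merges letter distinctions, the dotless entropy of a text is never higher than the dotted entropy, which is consistent with the smaller vocabularies the paper reports.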
Citations: 0
Humans Learn Language from Situated Communicative Interactions. What about Machines?
IF 9.3 | Q2 Computer Science | Pub Date: 2024-08-02 | DOI: 10.1162/coli_a_00534
Katrien Beuls, Paul Van Eecke
Humans acquire their native languages by taking part in communicative interactions with their caregivers. These interactions are meaningful, intentional, and situated in their everyday environment. The situated and communicative nature of the interactions is essential to the language acquisition process, as language learners depend on clues provided by the communicative environment to make sense of the utterances they perceive. As such, the linguistic knowledge they build up is rooted in linguistic forms, their meaning, and their communicative function. When it comes to machines, the situated, communicative, and interactional aspects of language learning are often passed over. This applies in particular to today’s large language models (LLMs), where the input is predominantly text-based, and where the distribution of character groups or words serves as a basis for modeling the meaning of linguistic expressions. In this article, we argue that this design choice lies at the root of a number of important limitations, in particular regarding the data hungriness of the models, their limited ability to perform human-like logical and pragmatic reasoning, and their susceptibility to biases. At the same time, we make a case for an alternative approach that models how artificial agents can acquire linguistic structures by participating in situated communicative interactions. Through a selection of experiments, we show how the linguistic knowledge that is captured in the resulting models is of a fundamentally different nature than the knowledge captured by LLMs and argue that this change of perspective provides a promising path towards more human-like language processing in machines.
Citations: 0
Exceptions, Instantiations, and Overgeneralization: Insights into How Language Models Process Generics
IF 9.3 | Q2 Computer Science | Pub Date: 2024-07-30 | DOI: 10.1162/coli_a_00530
Emily Allaway, Chandra Bhagavatula, Jena D. Hwang, Kathleen McKeown, Sarah-Jane Leslie
Large language models (LLMs) have garnered a great deal of attention for their exceptional generative performance on commonsense and reasoning tasks. In this work, we investigate LLMs’ capabilities for generalization using a particularly challenging type of statement: generics. Generics express generalizations (e.g., birds can fly) but do so without explicit quantification. They are notable because they generalize over their instantiations (e.g., sparrows can fly) yet hold true even in the presence of exceptions (e.g., penguins do not). For humans, these generic generalizations play a fundamental role in cognition, concept acquisition, and intuitive reasoning. We investigate how LLMs respond to and reason about generics. To this end, we first propose a framework grounded in pragmatics to automatically generate both exceptions and instantiations – collectively, exemplars. We make use of focus – a pragmatic phenomenon that highlights meaning-bearing elements in a sentence – to capture the full range of interpretations of generics across different contexts of use. This allows us to derive precise logical definitions for exemplars and operationalize them to automatically generate exemplars from LLMs. Using our system, we generate a dataset of ∼370k exemplars across ∼17k generics and conduct a human validation of a sample of the generated data. We use our final generated dataset to investigate how LLMs reason about generics. Humans have a documented tendency to conflate universally quantified statements (e.g., all birds can fly) with generics. Therefore, we probe whether LLMs exhibit similar overgeneralization behavior in terms of quantification and in property inheritance. We find that LLMs do show evidence of overgeneralization, although they sometimes struggle to reason about exceptions. Furthermore, we find that LLMs may exhibit similar non-logical behavior to humans when considering property inheritance from generics.
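The relationship between a generic, its instantiations, and its exceptions can be illustrated with a toy example. All names and the majority threshold below are invented for illustration; the paper's actual exemplar definitions are derived from pragmatic focus, not from counting instances.

```python
# Toy illustration: a generic ("birds can fly") tolerates exceptions,
# while the universally quantified reading ("all birds can fly") does not.
birds = {
    "sparrow": {"can_fly": True},   # instantiation
    "robin":   {"can_fly": True},   # instantiation
    "penguin": {"can_fly": False},  # exception
}

def instantiations(kind, prop):
    # Instances that satisfy the property the generic attributes.
    return [x for x, feats in kind.items() if feats[prop]]

def exceptions(kind, prop):
    # Instances that fail the property yet do not falsify the generic.
    return [x for x, feats in kind.items() if not feats[prop]]

def generic_holds(kind, prop, threshold=0.5):
    # Crude stand-in for genericity: a majority of instances suffices.
    return len(instantiations(kind, prop)) / len(kind) > threshold

def universal_holds(kind, prop):
    # The universal reading fails on any exception, unlike the generic.
    return all(feats[prop] for feats in kind.values())
```

Here `generic_holds(birds, "can_fly")` is true while `universal_holds(birds, "can_fly")` is false, mirroring the human conflation of the two readings that the paper probes in LLMs.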
Citations: 0
Usage-based grammar induction from minimal cognitive principles
IF 9.3 | Q2 Computer Science | Pub Date: 2024-07-30 | DOI: 10.1162/coli_a_00528
Anna Jon-And, Jérôme Michaud
This study explores the cognitive mechanisms underlying human language acquisition through grammar induction by a minimal cognitive architecture, with a short and flexible sequence memory as its most central feature. We use reinforcement learning for the task of identifying sentences in a stream of words from artificial languages. Results demonstrate the model’s ability to identify frequent and informative multi-word chunks, reproducing characteristics of natural language acquisition. The model successfully navigates varying degrees of linguistic complexity, revealing efficient adaptation to combinatorial challenges through the reuse of sequential patterns. The emergence of parsimonious tree structures suggests an optimization for the sentence identification task, balancing economy and information. The cognitive architecture reflects aspects of human memory systems and decision-making processes, enhancing its cognitive plausibility. While the model exhibits limitations in generalization and semantic representation, its minimalist nature offers insights into some fundamental mechanisms of language learning. Our study demonstrates the power of this simple architecture and stresses the importance of sequence memory in language learning. Since other animals do not seem to have faithful sequence memory, this may be a key to understanding why only humans have developed complex languages.
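A minimal, frequency-based sketch of what identifying "frequent and informative multi-word chunks" in a word stream could look like. This is a crude stand-in, not the paper's reinforcement-learning architecture, and all names are invented.

```python
from collections import Counter

def frequent_chunks(words, max_len=3, min_count=2):
    # Count every n-gram (2 ≤ n ≤ max_len) in the stream and keep the
    # recurrent ones; a crude proxy for reusable sequential patterns.
    counts = Counter(
        tuple(words[i:i + n])
        for n in range(2, max_len + 1)
        for i in range(len(words) - n + 1)
    )
    return {chunk: c for chunk, c in counts.items() if c >= min_count}

# Toy artificial-language stream: only "the dog" recurs as a chunk.
stream = "the dog ran the dog slept a cat ran".split()
chunks = frequent_chunks(stream)
```

A chunk such as `("the", "dog")` survives because it recurs, while one-off bigrams are discarded, loosely mirroring how reuse of sequential patterns can compress a combinatorial input.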
Citations: 0
From Form(s) to Meaning: Probing the Semantic Depths of Language Models Using Multisense Consistency
IF 9.3 | Q2 Computer Science | Pub Date: 2024-07-30 | DOI: 10.1162/coli_a_00529
Xenia Ohmer, Elia Bruni, Dieuwke Hupkes
The staggering pace with which the capabilities of large language models (LLMs) are increasing, as measured by a range of commonly used natural language understanding (NLU) benchmarks, raises many questions regarding what “understanding” means for a language model and how it compares to human understanding. This is especially true since many LLMs are exclusively trained on text, casting doubt on whether their stellar benchmark performances are reflective of a true understanding of the problems represented by these benchmarks, or whether LLMs simply excel at uttering textual forms that correlate with what someone who understands the problem would say. In this philosophically inspired work, we aim to create some separation between form and meaning, with a series of tests that leverage the idea that world understanding should be consistent across presentational modes — inspired by Fregean senses — of the same meaning. Specifically, we focus on consistency across languages as well as paraphrases. Taking GPT-3.5 as our object of study, we evaluate multisense consistency across five different languages and various tasks. We start the evaluation in a controlled setting, asking the model for simple facts, and then proceed with an evaluation on four popular NLU benchmarks. We find that the model’s multisense consistency is lacking and run several follow-up analyses to verify that this lack of consistency is due to a sense-dependent task understanding. We conclude that, in this aspect, the understanding of LLMs is still quite far from being consistent and human-like, and deliberate on how this impacts their utility in the context of learning about human language and understanding.
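The core idea of multisense consistency, that answers to the same underlying question should agree across languages and paraphrases, can be sketched as a simple pairwise agreement score. This is purely illustrative (the function name is invented); the paper evaluates GPT-3.5 on real benchmarks with task-specific scoring.

```python
from itertools import combinations

def consistency(answers):
    # Fraction of answer pairs that agree across presentational modes
    # (e.g., translations or paraphrases) of the same question.
    pairs = list(combinations(answers, 2))
    if not pairs:
        return 1.0  # a single mode is trivially consistent with itself
    return sum(a == b for a, b in pairs) / len(pairs)
```

For example, identical answers across three phrasings score 1.0, while one divergent answer out of three drops the score to 1/3, independent of which answer is actually correct: the measure separates consistency from accuracy.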
Citations: 0
Exploring temporal sensitivity in the brain using multi-timescale language models: an EEG decoding study
IF 9.3 | Q2 Computer Science | Pub Date: 2024-07-30 | DOI: 10.1162/coli_a_00533
Sijie Ling, Alex Murphy, Alona Fyshe
The brain’s ability to perform complex computations at varying timescales is crucial, ranging from understanding single words to grasping the overarching narrative of a story. Recently, multi-timescale long short-term memory (MT-LSTM) models (Mahto et al. 2020; Jain et al. 2020) have been introduced, which use temporally-tuned parameters to induce sensitivity to different timescales of language processing (i.e. related to near/distant words). However, there has not been an exploration of the relation between such temporally-tuned information processing in MT-LSTMs and the brain’s language processing using high temporal resolution recording modalities, such as electroencephalography (EEG). To bridge this gap, we used an EEG dataset recorded while participants listened to Chapter 1 of “Alice in Wonderland” and trained ridge regression models to predict the temporally-tuned MT-LSTM embeddings from EEG responses. Our analysis reveals that EEG signals can be used to predict MT-LSTM embeddings across various timescales. For longer timescales, our models produced accurate predictions within an extended time window of ±2 s around word onset, while for shorter timescales, significant predictions are confined to a narrow window ranging from −180 ms to 790 ms. Intriguingly, we observed that short timescale information is not only processed in the vicinity of word onset but also at distant time points. These observations underscore the parallels and discrepancies between computational models and the neural mechanisms of the brain. As word embeddings are used more as in silico models of semantic representation in the brain, a more explicit consideration of timescale-dependent processing enables more targeted explorations of language processing in humans and machines.
Citations: 0
Perception of Phonological Assimilation by Neural Speech Recognition Models
IF 9.3 | Q2 Computer Science | Pub Date: 2024-07-30 | DOI: 10.1162/coli_a_00526
Charlotte Pouw, Marianne de Heer Kloots, Afra Alishahi, Willem Zuidema
Human listeners effortlessly compensate for phonological changes during speech perception, often unconsciously inferring the intended sounds. For example, listeners infer the underlying /n/ when hearing an utterance such as “clea[m] pan”, where [m] arises from place assimilation to the following labial [p]. This article explores how the neural speech recognition model Wav2Vec2 perceives assimilated sounds, and identifies the linguistic knowledge that is implemented by the model to compensate for assimilation during Automatic Speech Recognition (ASR). Using psycholinguistic stimuli, we systematically analyze how various linguistic context cues influence compensation patterns in the model’s output. Complementing these behavioral experiments, our probing experiments indicate that the model shifts its interpretation of assimilated sounds from their acoustic form to their underlying form in its final layers. Finally, our causal intervention experiments suggest that the model relies on minimal phonological context cues to accomplish this shift. These findings represent a step towards better understanding the similarities and differences in phonological processing between neural ASR models and humans.
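Probing analyses like those described above are often scored with ABX discrimination tests, which can be sketched as follows. The vectors here are toy stand-ins; in a real test, X shares a phonemic category with A while B belongs to a different one.

```python
import numpy as np

def abx_correct(a, b, x):
    # One ABX trial: X is classified correctly if its representation is
    # closer (in Euclidean distance) to A than to B.
    return np.linalg.norm(x - a) < np.linalg.norm(x - b)

def abx_score(A, B, X):
    # Accuracy over a batch of trials, given rows of A/B/X embeddings.
    return np.mean([abx_correct(a, b, x) for a, b, x in zip(A, B, X)])
```

A score near 1.0 means the representation separates the two categories; comparing scores layer by layer is one way to locate where a model shifts from acoustic to underlying phonological form.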
Citations: 0
Decode, move and speak! Self-supervised learning of speech units, gestures, and sounds relationships using vocal imitation
IF 9.3 · Zone 2, Computer Science · Pub Date: 2024-07-30 · DOI: 10.1162/coli_a_00532
Marc-Antoine Georges, Marvin Lavechin, Jean-Luc Schwartz, Thomas Hueber
Speech learning encompasses mastering a complex motor system to produce speech sounds from articulatory gestures while simultaneously uncovering discrete units that provide entry to the linguistic system. Remarkably, children acquire these associations between speech sounds, articulatory gestures, and linguistic units in a weakly supervised manner, without the need for explicit labeling of auditory inputs or access to target articulatory gestures. This study uses self-supervised deep learning to investigate the respective roles of sounds, gestures, and linguistic units in speech acquisition and control. In a first experiment, we analysed the quantized representations learned by vector-quantized variational autoencoders (VQ-VAE) from ground truth acoustic and articulatory data using ABX tests. We show an interesting complementarity between acoustic and articulatory modalities that may help in the discovery of phonemes. In a second experiment, we introduce a computational agent that repeats auditory speech inputs by controlling a virtual vocal apparatus. This agent integrates an articulatory synthesizer capable of reproducing diverse speech stimuli from interpretable parameters, along with two internal models implementing the articulatory-to-acoustic (forward) and acoustic-to-articulatory (inverse) mapping, respectively. Additionally, two inductive biases are used to regularize the ill-posed acoustic-to-articulatory inverse mapping. In line with the first experiment, we explore the complementarity between the auditory input and the articulatory parameters inferred by the agent. We also evaluate the impact of discretizing auditory inputs using VQ-VAE. While the majority of the agent’s productions are intelligible (according to perceptual evaluations), our analysis highlights inconsistencies in the underlying articulatory trajectories. In particular, we show that the agent’s productions only partially reproduce the complementarity between the auditory and articulatory modalities observed in humans.
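The ABX evaluation used in the first experiment reduces to a simple comparison: given tokens A and X drawn from one phoneme category and B from another, a representation passes the trial when X lies closer to A than to B. A minimal sketch over toy quantized codes (the distance function and vectors are illustrative, not the paper's actual VQ-VAE codes):

```python
def abx_trial(a, b, x, dist):
    """Return True if X is closer to A (same category) than to B (other)."""
    return dist(a, x) < dist(b, x)

def abx_score(trials, dist):
    """Fraction of ABX trials where the same-category pair wins."""
    wins = sum(abx_trial(a, b, x, dist) for a, b, x in trials)
    return wins / len(trials)

def l1(u, v):
    """L1 distance between two equal-length vectors."""
    return sum(abs(p - q) for p, q in zip(u, v))

# Toy 2-D codes: /a/-like tokens cluster near (0, 0), /i/-like near (1, 1).
trials = [
    ((0.1, 0.0), (1.0, 0.9), (0.0, 0.2)),  # A=/a/, B=/i/, X=/a/
    ((0.9, 1.0), (0.1, 0.1), (1.0, 0.8)),  # A=/i/, B=/a/, X=/i/
]

print(abx_score(trials, l1))  # 1.0 on these well-separated toy clusters
```

A score near 1.0 indicates the representation separates the two categories; chance level is 0.5.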
{"title":"Decode, move and speak! Self-supervised learning of speech units, gestures, and sounds relationships using vocal imitation","authors":"Marc-Antoine Georges, Marvin Lavechin, Jean-Luc Schwartz, Thomas Hueber","doi":"10.1162/coli_a_00532","DOIUrl":"https://doi.org/10.1162/coli_a_00532","url":null,"abstract":"Speech learning encompasses mastering a complex motor system to produce speech sounds from articulatory gestures while simultaneously uncovering discrete units that provide entry to the linguistic system. Remarkably, children acquire these associations between speech sounds, articulatory gestures, and linguistic units in a weakly supervised manner, without the need for explicit labeling of auditory inputs or access to target articulatory gestures. This study uses self-supervised deep learning to investigate the respective roles of sounds, gestures, and linguistic units in speech acquisition and control. In a first experiment, we analysed the quantized representations learned by vector-quantized variational autoencoders (VQ-VAE) from ground truth acoustic and articulatory data using ABX tests. We show an interesting complementarity between acoustic and articulatory modalities that may help in the discovery of phonemes. In a second experiment, we introduce a computational agent that repeats auditory speech inputs by controlling a virtual vocal apparatus. This agent integrates an articulatory synthesizer capable of reproducing diverse speech stimuli from interpretable parameters, along with two internal models implementing the articulatory-to-acoustic (forward) and acoustic-to-articulatory (inverse) mapping, respectively. Additionally, two inductive biases are used to regularize the ill-posed acoustic-to-articulatory inverse mapping. In line with the first experiment, we explore the complementarity between the auditory input and the articulatory parameters inferred by the agent. We also evaluate the impact of discretizing auditory inputs using VQ-VAE. 
While the majority of the agent’s productions are intelligible (according to perceptual evaluations), our analysis highlights inconsistencies in the underlying articulatory trajectories. In particular, we show that the agent’s productions only partially reproduce the complementarity between the auditory and articulatory modalities observed in humans.","PeriodicalId":49089,"journal":{"name":"Computational Linguistics","volume":"184 1","pages":""},"PeriodicalIF":9.3,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141872884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
IF 9.3 · Zone 2, Computer Science · Pub Date: 2024-07-30 · DOI: 10.1162/coli_a_00525
Andrew Lampinen
How should we compare the capabilities of language models (LMs) and humans? In this paper, I draw inspiration from comparative psychology to highlight challenges in these comparisons. I focus on a case study: processing of recursively nested grammatical structures. Prior work suggests that LMs cannot process these structures as reliably as humans can. However, the humans were provided with instructions and substantial training, while the LMs were evaluated zero-shot. I therefore match the evaluation more closely. Providing large LMs with a simple prompt—with substantially less content than the human training—allows the LMs to consistently outperform the human results, even in more deeply nested conditions than were tested with humans. Furthermore, the effects of prompting are robust to the particular structures and vocabulary used in the prompt. Finally, reanalyzing the existing human data suggests that the humans may not perform above chance at the difficult structures initially. Thus, large LMs may indeed process recursively nested grammatical structures as reliably as humans, when evaluated comparably. This case study highlights how discrepancies in the evaluation methods can confound comparisons of language models and humans. I conclude by reflecting on the broader challenge of comparing human and model capabilities, and highlight an important difference between evaluating cognitive models and foundation models.
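The nesting manipulation in this case study can be made concrete by generating center-embedded relative clauses of increasing depth, where each added noun must later be resolved by a verb in reverse order. The vocabulary below is a hypothetical stand-in for the paper's stimuli.

```python
def center_embed(nouns, verbs):
    """Build a center-embedded sentence: nouns are stacked left-to-right,
    then verbs resolve them inside-out, e.g. 'the dog the cat chased ran'."""
    assert len(nouns) == len(verbs), "each noun needs a resolving verb"
    subject_part = " ".join(f"the {n}" for n in nouns)
    verb_part = " ".join(verbs[::-1])  # innermost noun takes the first verb
    return f"{subject_part} {verb_part}"

# Depth 1: no embedding; depth 3: a doubly nested relative clause.
print(center_embed(["dog"], ["ran"]))
print(center_embed(["dog", "cat", "rat"], ["ran", "chased", "saw"]))
```

Tracking which verb belongs to which noun gets harder as depth grows, which is exactly what makes these structures a stress test for both humans and LMs.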
{"title":"Can language models handle recursively nested grammatical structures? A case study on comparing models and humans","authors":"Andrew Lampinen","doi":"10.1162/coli_a_00525","DOIUrl":"https://doi.org/10.1162/coli_a_00525","url":null,"abstract":"How should we compare the capabilities of language models (LMs) and humans? In this paper, I draw inspiration from comparative psychology to highlight challenges in these comparisons. I focus on a case study: processing of recursively nested grammatical structures. Prior work suggests that LMs cannot process these structures as reliably as humans can. However, the humans were provided with instructions and substantial training, while the LMs were evaluated zero-shot. I therefore match the evaluation more closely. Providing large LMs with a simple prompt—with substantially less content than the human training—allows the LMs to consistently outperform the human results, even in more deeply nested conditions than were tested with humans. Furthermore, the effects of prompting are robust to the particular structures and vocabulary used in the prompt. Finally, reanalyzing the existing human data suggests that the humans may not perform above chance at the difficult structures initially. Thus, large LMs may indeed process recursively nested grammatical structures as reliably as humans, when evaluated comparably. This case study highlights how discrepancies in the evaluation methods can confound comparisons of language models and humans. 
I conclude by reflecting on the broader challenge of comparing human and model capabilities, and highlight an important difference between evaluating cognitive models and foundation models.","PeriodicalId":49089,"journal":{"name":"Computational Linguistics","volume":"74 1","pages":""},"PeriodicalIF":9.3,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141872991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Meaning beyond lexicality: Capturing Pseudoword Definitions with Language Models
IF 9.3 · Zone 2, Computer Science · Pub Date: 2024-07-30 · DOI: 10.1162/coli_a_00527
Andrea Gregor de Varda, Daniele Gatti, Marco Marelli, Fritz Günther
Pseudowords such as “knackets” or “spechy” – letter strings that are consistent with the orthotactical rules of a language but do not appear in its lexicon – are traditionally considered to be meaningless, and employed as such in empirical studies. However, recent studies that show specific semantic patterns associated with these words as well as semantic effects on human pseudoword processing have cast doubt on this view. While these studies suggest that pseudowords have meanings, they provide only extremely limited insight as to whether humans are able to ascribe explicit and declarative semantic content to unfamiliar word forms. In the present study, we employed an exploratory-confirmatory study design to examine this question. In a first exploratory study, we started from a pre-existing dataset of words and pseudowords alongside human-generated definitions for these items. Employing 18 different language models, we showed that the definitions actually produced for (pseudo)words were closer to their respective (pseudo)words than the definitions for the other items. Based on these initial results, we conducted a second, pre-registered, high-powered confirmatory study collecting a new, controlled set of (pseudo)word interpretations. This second study confirmed the results of the first one. Taken together, these findings support the idea that meaning construction is supported by a flexible form-to-meaning mapping system based on statistical regularities in the language environment that can accommodate novel lexical entries as soon as they are encountered.
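The core comparison in both studies — whether human-generated definitions land closer in embedding space to their own (pseudo)word than to other items' definitions — can be sketched with cosine similarity over toy vectors. The embeddings below are made up for illustration; the paper drew them from 18 different language models.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy embeddings: one vector per (pseudo)word and one per human-written
# definition (values are illustrative, not model outputs).
words = {"knackets": [0.9, 0.1], "spechy": [0.1, 0.9]}
defs_ = {"knackets": [0.8, 0.2], "spechy": [0.2, 0.8]}

def matched_closer(word):
    """True if the word's own definition is its nearest definition."""
    own = cosine(words[word], defs_[word])
    others = [cosine(words[word], d) for w, d in defs_.items() if w != word]
    return all(own > o for o in others)

print(all(matched_closer(w) for w in words))  # True for these toy vectors
```

If this holds systematically across items, as the paper reports, the definitions produced for pseudowords are not arbitrary but track the word forms that prompted them.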
像 "knackets "或 "spechy "这样的伪词--符合一种语言的正字法规则但不出现在其词典中的字母串--传统上被认为是没有意义的,在实证研究中也是这样使用的。然而,最近的研究显示了与这些词相关的特定语义模式,以及对人类伪词处理的语义影响,这些研究使人们对这种观点产生了怀疑。虽然这些研究表明伪词是有意义的,但对于人类是否能够将明确的陈述性语义内容赋予不熟悉的词形,这些研究只提供了极为有限的见解。在本研究中,我们采用了探索-确认研究设计来探讨这一问题。在第一项探索性研究中,我们从已有的单词和假词数据集以及人类为这些项目生成的定义入手。通过使用 18 种不同的语言模型,我们发现,与其他项目的定义相比,实际生成的(伪)词定义更接近各自的(伪)词。在这些初步结果的基础上,我们进行了第二次预先登记的高功率确认性研究,收集了一组新的、受控的(伪)词释义。第二次研究证实了第一次研究的结果。综上所述,这些研究结果支持这样一种观点,即意义建构是由一个灵活的形式-意义映射系统支持的,该系统基于语言环境中的统计规律性,能够在遇到新词条目时立即将其纳入其中。
{"title":"Meaning beyond lexicality: Capturing Pseudoword Definitions with Language Models","authors":"Andrea Gregor de Varda, Daniele Gatti, Marco Marelli, Fritz Günther","doi":"10.1162/coli_a_00527","DOIUrl":"https://doi.org/10.1162/coli_a_00527","url":null,"abstract":"Pseudowords such as “knackets” or “spechy” – letter strings that are consistent with the orthotactical rules of a language but do not appear in its lexicon – are traditionally considered to be meaningless, and employed as such in empirical studies. However, recent studies that show specific semantic patterns associated with these words as well as semantic effects on human pseudoword processing have cast doubt on this view. While these studies suggest that pseudowords have meanings, they provide only extremely limited insight as to whether humans are able to ascribe explicit and declarative semantic content to unfamiliar word forms. In the present study, we employed an exploratory-confirmatory study design to examine this question. In a first exploratory study, we started from a pre-existing dataset of words and pseudowords alongside human-generated definitions for these items. Employing 18 different language models, we showed that the definitions actually produced for (pseudo)words were closer to their respective (pseudo)words than the definitions for the other items. Based on these initial results, we conducted a second, pre-registered, high-powered confirmatory study collecting a new, controlled set of (pseudo)word interpretations. This second study confirmed the results of the first one. 
Taken together, these findings support the idea that meaning construction is supported by a flexible form-to-meaning mapping system based on statistical regularities in the language environment that can accommodate novel lexical entries as soon as they are encountered.","PeriodicalId":49089,"journal":{"name":"Computational Linguistics","volume":"55 1","pages":""},"PeriodicalIF":9.3,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141864208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0