
Computer Speech and Language: Latest Publications

A knowledge-Aware NLP-Driven conversational model to detect deceptive contents on social media posts
IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-18 · DOI: 10.1016/j.csl.2024.101743
Deepak Kumar Jain, S. Neelakandan, Ankit Vidyarthi, Anand Mishra, Ahmed Alkhayyat
The widespread dissemination of deceptive content on social media presents a substantial challenge to preserving authenticity and trust. The epidemic growth of false news is due to the greater use of social media to transmit news, rather than conventional mass media such as newspapers, magazines, radio, and television. Humans' incapacity to differentiate between true and false facts exposes fake news as a threat to logical truth, democracy, journalism, and government credibility. Using a combination of advanced methodologies, Deep Learning (DL) methods, and Natural Language Processing (NLP) approaches, researchers and technology developers attempt to build robust systems proficient in discerning the subtle nuances that betray deceptive intent. By analysing the conversational linguistic patterns of misleading data, these techniques aim to improve the resilience of social platforms against the spread of deceptive content, ultimately contributing to a better-informed and more trustworthy online platform. This paper proposes a Knowledge-Aware NLP-Driven AlBiruni Earth Radius Optimization Algorithm with Deep Learning Tool for Enhanced Deceptive Content Detection (BER-DLEDCD) on social media. The purpose of the BER-DLEDCD system is to identify and classify deceptive content using NLP with an optimal DL model. In the BER-DLEDCD technique, data pre-processing takes place to convert the input data into a compatible format. Furthermore, the BER-DLEDCD approach applies a hybrid DL technique encompassing a Convolutional Neural Network with Long Short-Term Memory (CNN-LSTM) for deceptive content detection. Moreover, the BER approach is deployed to improve the hyperparameter choices of the CNN-LSTM technique, which leads to enhanced detection performance. The simulation outcomes of the BER-DLEDCD system were examined on a benchmark database. The extensive results show that the BER-DLEDCD system achieved excellent performance compared with other recent approaches, with 94% accuracy, 94.83% precision, and a 94.30% F-score.
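The detector at the heart of BER-DLEDCD is a hybrid CNN-LSTM network: convolutions capture local n-gram patterns, and an LSTM models their order. A minimal PyTorch sketch of that family of architectures follows; it is not the authors' implementation, the vocabulary size, layer widths, and binary class count are placeholder assumptions, and the BER hyperparameter search is omitted.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Hybrid CNN-LSTM text classifier: a 1-D convolution extracts local
    n-gram features, then an LSTM models their temporal order."""
    def __init__(self, vocab_size=10000, embed_dim=128, conv_channels=64,
                 lstm_hidden=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        self.fc = nn.Linear(lstm_hidden, num_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embedding(token_ids)              # (batch, seq_len, embed)
        x = x.transpose(1, 2)                      # Conv1d expects channels first
        x = torch.relu(self.conv(x))               # (batch, channels, seq_len)
        x = x.transpose(1, 2)                      # back to (batch, seq_len, channels)
        _, (h_n, _) = self.lstm(x)                 # final hidden state per sequence
        return self.fc(h_n[-1])                    # (batch, num_classes)

model = CNNLSTMClassifier()
logits = model(torch.randint(1, 10000, (4, 50)))   # 4 fake posts, 50 tokens each
print(logits.shape)                                # torch.Size([4, 2])
```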
Citations: 0
ECDG-DST: A dialogue state tracking model based on efficient context and domain guidance for smart dialogue systems
IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-17 · DOI: 10.1016/j.csl.2024.101741
Meng Zhu, Xiaolong Xu
Dialogue state tracking (DST) is an important component of smart dialogue systems, with the goal of predicting the current dialogue state at each conversation turn. However, most previous works must store a large amount of data, which accumulates noisy information as the conversation takes many turns. In addition, they overlook the effect of the domain in the dialogue state tracking task. In this paper, we propose ECDG-DST (a dialogue state tracking model based on efficient context and domain guidance) for smart dialogue systems, which preserves key information while retaining less dialogue history, and masks the domain effectively in dialogue state tracking. Our model utilizes the efficient conversation context, the previous conversation state, and the relationships between domains and slots to narrow the range of slots to be updated, and also constrains the directions of values to reduce the generation of irrelevant words. The ECDG-DST model consists of four main components: an encoder, a domain guide, an operation predictor, and a value generator. We conducted experiments on three popular task-oriented dialogue datasets, Wizard-of-Oz2.0, MultiWOZ2.0, and MultiWOZ2.1, and the empirical results demonstrate that ECDG-DST improved joint goal accuracy by 0.45% on Wizard-of-Oz2.0, 2.44% on MultiWOZ2.0, and 2.05% on MultiWOZ2.1 compared to the baselines. In addition, we analyzed the scope of the efficient context through experiments and validated the effectiveness of our proposed domain guide mechanism through an ablation study.
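The domain guide's narrowing step can be pictured with a toy sketch: given the domains predicted active for the current turn, slots outside those domains are suppressed as update candidates. The schema and scores below are invented for illustration (in the spirit of MultiWOZ); the actual ECDG-DST module learns this behaviour from domain-slot relationships.

```python
# Hypothetical domain-to-slot schema, illustrative only.
DOMAIN_SLOTS = {
    "hotel": ["hotel-name", "hotel-area", "hotel-stars"],
    "train": ["train-departure", "train-destination", "train-day"],
}

def mask_slot_scores(slot_scores: dict, active_domains: set) -> dict:
    """Suppress update scores for slots outside the active domains, so only
    in-domain slots remain candidates for a dialogue-state update."""
    allowed = {slot for d in active_domains for slot in DOMAIN_SLOTS.get(d, [])}
    return {slot: (score if slot in allowed else 0.0)
            for slot, score in slot_scores.items()}

scores = {"hotel-area": 0.91, "train-day": 0.77, "hotel-stars": 0.12}
print(mask_slot_scores(scores, {"hotel"}))
# {'hotel-area': 0.91, 'train-day': 0.0, 'hotel-stars': 0.12}
```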
Citations: 0
Chinese Named Entity Recognition based on adaptive lexical weights
IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-16 · DOI: 10.1016/j.csl.2024.101735
Yaping Xu, Mengtao Ying, Kunyu Fang, Ruixing Ming
Currently, many researchers use weights to merge self-matched words obtained through dictionary matching in order to enhance the performance of Named Entity Recognition (NER). However, these studies overlook the relationship between words and sentences when calculating lexical weights, resulting in fused word information that often does not align with the intended meaning of the sentence. To address this issue and enhance prediction performance, we propose an adaptive lexical weight approach for determining lexical weights. Given a sentence, we utilize an enhanced global attention mechanism to compute the correlation between self-matched words and the sentence, thereby focusing attention on crucial words while disregarding unreliable portions. Experimental results demonstrate that our proposed model outperforms existing state-of-the-art methods for Chinese NER on the MSRA, Weibo, and Resume datasets.
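The central idea, weighting each self-matched lexicon word by its relevance to the sentence context rather than treating all matches equally, can be sketched as scaled dot-product attention between a character's context vector and its matched-word embeddings. This is a simplification under assumed inputs; the paper's enhanced global attention mechanism is more elaborate.

```python
import numpy as np

def fuse_matched_words(char_vec: np.ndarray, word_vecs: np.ndarray) -> np.ndarray:
    """Attention-weighted fusion of self-matched lexicon words.

    char_vec:  (d,)   contextual representation of one character
    word_vecs: (m, d) embeddings of the m dictionary words matching it
    """
    scores = word_vecs @ char_vec / np.sqrt(char_vec.shape[0])  # relevance to context
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                    # softmax over matched words
    return char_vec + weights @ word_vecs                       # residual fusion

d = 8
fused = fuse_matched_words(np.random.randn(d), np.random.randn(3, d))
print(fused.shape)  # (8,)
```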
Citations: 0
Measuring and implementing lexical alignment: A systematic literature review
IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-11 · DOI: 10.1016/j.csl.2024.101731
Sumit Srivastava, Suzanna D. Wentzel, Alejandro Catala, Mariët Theune
Lexical alignment is a phenomenon often found in human–human conversations, where the interlocutors converge during a conversation to use the same terms and phrases for the same underlying concepts. Linguistic alignment is a mechanism used by humans for better communication between interlocutors, operating at various levels of linguistic knowledge and features, one of which is the lexical level. The existing literature suggests that alignment plays a significant role in communication between humans and is also beneficial in human–agent communication. Various methods have been proposed in the past to measure lexical alignment in human–human conversations, and also to implement it in conversational agents. In this research, we carry out an analysis of the existing methods to measure lexical alignment and also dissect methods to implement it in a conversational agent for personalizing human–agent interactions. We propose a new set of criteria that such methods should meet and discuss possible improvements to existing methods.
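One member of the simplest family of measures such reviews cover is repetition counting across speakers. The toy score below is illustrative only and not a specific measure endorsed by the authors.

```python
def lexical_alignment(prime_utterance: str, target_utterance: str) -> float:
    """Fraction of the target speaker's word types that repeat word types
    already used by the prime speaker: a crude repetition-based measure."""
    prime = set(prime_utterance.lower().split())
    target = set(target_utterance.lower().split())
    return len(prime & target) / len(target) if target else 0.0

print(lexical_alignment("could you book the red sofa", "yes I will book that sofa"))
# 0.333... (2 of 6 target word types, 'book' and 'sofa', repeat the prime)
```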
Citations: 0
A hybrid approach to Natural Language Inference for the SICK dataset
IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-10 · DOI: 10.1016/j.csl.2024.101736
Rodrigo Souza, Marcos Lopes
Natural Language Inference (NLI) can be described as the task of deciding whether a short text called the Hypothesis (H) can be inferred from another text called the Premise (P) (Poliak, 2020; Dagan et al., 2013). Affirmative answers are considered semantic entailments, and negative ones are either contradictions or semantically “neutral” statements. In the last three decades, many Natural Language Processing (NLP) methods have been put to use for solving this task. As with almost every other NLP task, Deep Learning (DL) techniques in general (and Transformer neural networks in particular) have been achieving the best results in this task in recent years, progressively improving their outcomes in solving NLI compared to classical, symbolic Knowledge Representation models.
Nevertheless, however successful DL models are in measurable results like accuracy and F-score, their outcomes are far from explicable, and this is an undesirable feature especially in a task such as NLI, which is meant to deal with language understanding together with the rational reasoning inherent to entailment and contradiction judgements. It is therefore tempting to evaluate how more explainable models would perform in NLI and to compare their performance with DL models.
This paper puts forth a pipeline that we call IsoLex. It provides explainable, transparent NLP models for NLI. It has been tested on a partial version of the SICK corpus (Marelli, 2014) called SICK-CE, containing only the contradiction and entailment pairs (4245 in total), thus leaving aside the neutral pairs in an attempt to concentrate on unambiguous semantic relationships, which arguably favors the intelligibility of the results.
The pipeline consists of three serialized, commonly used NLP models: first, an Isolation Forest module is used to filter out highly dissimilar Premise–Hypothesis pairs; second, a WordNet-based Lexical Relations module is employed to check whether the Premise and Hypothesis textual contents are related to each other in terms of synonymy, hyperonymy, or holonymy; finally, similarities between Premise and Hypothesis texts are evaluated by a simple cosine similarity function based on Word2Vec embeddings.
IsoLex has achieved 92% accuracy and 94% F-1 on SICK-CE. This is close to SOTA models for this kind of task, such as RoBERTa, which reaches 98% accuracy and 99% F-1 on the same dataset.
The small performance gap between IsoLex and SOTA DL models is largely compensated for by intelligibility at every step of the proposed pipeline. At any time it is possible to evaluate the role of similarity, lexical relatedness, and so forth in the overall process of inference.
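The second and third stages of the pipeline are straightforward to sketch. Below, a WordNet synonymy/hypernymy check via NLTK and a cosine similarity over pre-computed sentence embeddings; the Isolation Forest filter, the holonymy check, and the decision thresholds are omitted, and the exact criteria here are assumptions.

```python
import numpy as np
from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

def lexically_related(word_a: str, word_b: str) -> bool:
    """True if the lemmas share a synset (synonymy) or one lies on the
    other's hypernym closure (hyperonymy), per WordNet."""
    syn_a, syn_b = wn.synsets(word_a), wn.synsets(word_b)
    if set(syn_a) & set(syn_b):
        return True
    for s in syn_a:
        if set(s.closure(lambda x: x.hypernyms())) & set(syn_b):
            return True
    return False

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Stage-3 similarity between, e.g., averaged Word2Vec sentence vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(lexically_related("dog", "animal"))  # True: 'animal' is a hypernym of 'dog'
```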
Citations: 0
DSTM: A transformer-based model with dynamic-static feature fusion in speech emotion recognition
IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-09 · DOI: 10.1016/j.csl.2024.101733
Guowei Jin, Yunfeng Xu, Hong Kang, Jialin Wang, Borui Miao
With the support of multi-head attention, the Transformer shows remarkable results in speech emotion recognition. However, existing models still suffer from an inability to accurately locate important regions in semantic information at different time scales. To address this problem, we propose a Transformer-based network model for dynamic-static feature fusion, composed of a locally adaptive multi-head attention module and a global static attention module. The locally adaptive multi-head attention module adapts the attention window sizes and window centers of different regions through speech samples and learnable parameters, enabling the model to adaptively discover and pay attention to valuable information embedded in speech. The global static attention module enables the model to fully use each element in the sequence and learn critical global feature information by establishing connections over the entire input sequence. We also use a data-mixture training method to train our model and introduce the center loss function to supervise training, which speeds up model fitting and alleviates the sample imbalance problem to a certain extent. This method achieved good performance on the IEMOCAP and MELD datasets, proving that our proposed model structure and method have better accuracy and robustness.
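The locally adaptive attention can be pictured as a per-position window mask: each query position has a center and width, and attention outside that window is blocked. In the sketch below the centers and widths are given directly; in DSTM they are derived from speech samples and learnable parameters.

```python
import torch

def local_window_mask(centers: torch.Tensor, widths: torch.Tensor, seq_len: int):
    """Boolean mask (seq_len, seq_len): query i may attend to key j only if
    j falls inside i's window [center_i - width_i, center_i + width_i]."""
    keys = torch.arange(seq_len, dtype=torch.float)  # key positions
    lo = (centers - widths).unsqueeze(-1)            # (seq_len, 1)
    hi = (centers + widths).unsqueeze(-1)
    return (keys >= lo) & (keys <= hi)               # broadcast to (seq_len, seq_len)

seq_len = 6
centers = torch.arange(seq_len, dtype=torch.float)  # each window centered on itself
widths = torch.full((seq_len,), 1.0)                # +/- 1 position
print(local_window_mask(centers, widths, seq_len).int())
```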
Citations: 0
FE-CFNER: Feature Enhancement-based approach for Chinese Few-shot Named Entity Recognition
IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-09 · DOI: 10.1016/j.csl.2024.101730
Sanhe Yang, Peichao Lai, Ruixiong Fang, Yanggeng Fu, Feiyang Ye, Yilei Wang
Although significant progress has been made in Chinese Named Entity Recognition (NER) methods based on deep learning, their performance often falls short in few-shot scenarios. Feature enhancement is considered a promising approach to the problem of Chinese few-shot NER. However, traditional feature fusion methods tend to lose important information and integrate irrelevant information. Despite the benefits of incorporating BERT for improving entity recognition, its performance is limited when training data is insufficient. To tackle these challenges, this paper proposes a Feature Enhancement-based approach for Chinese Few-shot NER called FE-CFNER. FE-CFNER designs a double cross neural network to minimize information loss through two rounds of feature crossing. Additionally, adaptive weights and a top-k mechanism are introduced to sparsify attention distributions, enabling the model to prioritize important information related to entities while excluding irrelevant information. To further enhance the quality of BERT embeddings, FE-CFNER employs a contrastive template for contrastive-learning pre-training of BERT, enhancing BERT's semantic understanding capability. We evaluate the proposed method on four sampled Chinese NER datasets: Weibo, Resume, Taobao, and Youku. Experimental results validate the effectiveness and superiority of FE-CFNER in Chinese few-shot NER tasks.
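The top-k sparsification step admits a compact sketch: keep each query's k largest attention logits and push the rest to minus infinity before the softmax, so irrelevant positions receive exactly zero weight. The value of k is a placeholder, and the paper combines this with learned adaptive weights.

```python
import torch

def topk_sparse_attention(scores: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Keep only the k largest attention logits per query; mask the rest
    to -inf before softmax so they receive zero attention weight."""
    topk = torch.topk(scores, k=min(k, scores.size(-1)), dim=-1)
    masked = torch.full_like(scores, float("-inf"))
    masked.scatter_(-1, topk.indices, topk.values)   # restore only top-k logits
    return torch.softmax(masked, dim=-1)

scores = torch.randn(2, 5, 10)          # (batch, queries, keys)
attn = topk_sparse_attention(scores, k=3)
print((attn > 0).sum(-1))               # exactly 3 nonzero weights per query
```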
Citations: 0
Spoofing countermeasure for fake speech detection using brute force features
IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-10-02 · DOI: 10.1016/j.csl.2024.101732
Arsalan Rahman Mirza, Abdulbasit K. Al-Talabani
Due to progress in deep learning technology, techniques that generate spoofed speech have emerged rapidly. Such synthetic speech can be exploited for harmful purposes, like impersonation or disseminating false information. Researchers in the area investigate which features are useful for spoof detection. This paper extensively investigates three problems in speech spoof detection: the imbalanced number of samples per class, which may negatively affect the performance of any detection model; the effect of early and late feature fusion; and the analysis of unseen attacks on the model. Regarding the imbalance issue, we have proposed two approaches (a Synthetic Minority Oversampling Technique (SMOTE)-based and a Bootstrap-based model). We have used the openSMILE toolkit to extract different feature sets, and investigated their individual results as well as their early and late fusion. The experiments are evaluated using the ASVspoof 2019 datasets, which encompass synthetic, voice-conversion, and replayed speech samples. Additionally, Support Vector Machine (SVM) and Deep Neural Network (DNN) classifiers have been adopted. The outcomes from various test scenarios indicated that neither the imbalanced nature of the dataset nor a specific feature or its fusions outperformed the brute-force version of the model, as the best Equal Error Rates (EER) achieved by the imbalance model are 6.67% for Logical Access (LA) and 1.80% for Physical Access (PA).
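The SMOTE-based balancing step is standard enough to sketch with the imbalanced-learn library. The feature dimensionality and class ratio below are invented stand-ins for the openSMILE features and the ASVspoof label skew.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))              # stand-in acoustic feature vectors
y = np.array([0] * 900 + [1] * 100)          # 9:1 bonafide vs. spoofed classes

# SMOTE synthesizes minority-class samples by interpolating between neighbors,
# balancing the classes before training the classifier.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
clf = SVC(kernel="rbf").fit(X_bal, y_bal)
print(np.bincount(y_bal))                    # [900 900]
```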
Citations: 0
A language-agnostic model of child language acquisition
IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-30 · DOI: 10.1016/j.csl.2024.101714
Louis Mahon, Omri Abend, Uri Berger, Katherine Demuth, Mark Johnson, Mark Steedman
This work reimplements a recent semantic bootstrapping child language acquisition (CLA) model, which was originally designed for English, and trains it to learn a new language: Hebrew. The model learns from pairs of utterances and logical forms as meaning representations, and acquires both syntax and word meanings simultaneously. The results show that the model mostly transfers to Hebrew, but that a number of factors, including the richer morphology of Hebrew, make the learning slower and less robust. This suggests that a clear direction for future work is to enable the model to leverage the similarities between different word forms.
Citations: 0
Evidence and Axial Attention Guided Document-level Relation Extraction
IF 3.1 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-09-28 · DOI: 10.1016/j.csl.2024.101728
Jiawei Yuan, Hongyong Leng, Yurong Qian, Jiaying Chen, Mengnan Ma, Shuxiang Hou
Document-level Relation Extraction (DocRE) aims to identify semantic relations among multiple entity pairs within a document. Most previous DocRE methods take the entire document as input. However, for human annotators, a small subset of sentences in the document, namely the evidence, is sufficient to infer the relation of an entity pair. Additionally, a document usually contains multiple entities, and these entities are scattered throughout various locations of the document. Previous models use these entities independently, ignoring the global interdependency among relation triples. To handle the above issues, we propose a novel framework, EAAGRE (Evidence and Axial Attention Guided Relation Extraction). Firstly, we use human-annotated evidence labels to supervise the attention module of the DocRE system, making the model pay attention to the evidence sentences rather than others. Secondly, we construct an entity-level relation matrix and use axial attention to capture the global interactions among entity pairs. By doing so, we further extract the relations that require multiple entity pairs for prediction. We conduct extensive experiments on DocRED and achieve some improvement over baseline models, verifying the effectiveness of our model.
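Axial attention over the entity-level relation matrix can be sketched as row-wise then column-wise self-attention, which lets the representation of pair (i, j) exchange information with every pair sharing entity i or entity j. Dimensions and head counts below are placeholders, and the paper's module details may differ.

```python
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Self-attention along rows then columns of an entity-pair matrix, so
    each pair (i, j) can see all pairs that share entity i or entity j."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                    # x: (n_entities, n_entities, dim)
        rows, _ = self.row_attn(x, x, x)     # attend across each row
        x = x + rows                         # residual connection
        xt = x.transpose(0, 1)               # columns become rows
        cols, _ = self.col_attn(xt, xt, xt)  # attend across each column
        return x + cols.transpose(0, 1)

m = AxialAttention(dim=32)
out = m(torch.randn(5, 5, 32))               # 5 entities -> 5x5 pair matrix
print(out.shape)                             # torch.Size([5, 5, 32])
```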
Citations: 0