
Proceedings of the 1st International Workshop on Multimodal Conversational AI: Latest Publications

Automatic Speech Recognition and Natural Language Understanding for Emotion Detection in Multi-party Conversations
Pub Date: 2020-10-16 DOI: 10.1145/3423325.3423737
Ilja Popovic, D. Culibrk, Milan Mirković, S. Vukmirović
Conversational emotion and sentiment analysis approaches rely on Natural Language Understanding (NLU) and audio processing components to achieve the goal of detecting emotions and sentiment based on what is being said. While there has been marked progress in pushing the state of the art of these methods on benchmark multimodal data sets, such as the Multimodal EmotionLines Dataset (MELD), the advances still seem to lag behind what has been achieved in mainstream Automatic Speech Recognition (ASR) and NLU applications, and we were unable to identify any widely used products, services or production-ready systems that would enable the user to reliably detect emotions from audio recordings of multi-party conversations. Published, state-of-the-art scientific studies of multi-view emotion recognition seem to take it for granted that a human-generated or edited transcript is available as input to the NLU modules, providing no information about what happens in a realistic application scenario, where only audio is available and the NLU processing has to rely on text generated by ASR. Motivated by this insight, we present a study designed to evaluate the possibility of applying widely used, state-of-the-art commercial ASR products as the initial audio processing component in an emotion-from-speech detection system. We propose an approach which relies on commercially available products and services, such as Google Speech-to-Text, Mozilla DeepSpeech and the NVIDIA NeMo toolkit, to process the audio and applies state-of-the-art NLU approaches for emotion recognition, in order to quickly create a robust, production-ready emotion-from-speech detection system applicable to multi-party conversations.
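As a rough illustration of the pipeline this abstract describes, ASR output can be piped directly into an off-the-shelf NLU emotion classifier. The snippet below is a minimal sketch under assumptions: it uses Mozilla DeepSpeech for transcription and a Hugging Face text-classification pipeline for emotion; the model file path and checkpoint name are placeholders, and this is not the authors' implementation.

```python
# Minimal ASR -> NLU emotion sketch (illustrative; not the authors' code).
# Assumes: pip install deepspeech transformers, a DeepSpeech acoustic model file,
# and an emotion-classification checkpoint (both names below are placeholders).
import wave
import numpy as np
from deepspeech import Model
from transformers import pipeline

asr = Model("deepspeech-0.9.3-models.pbmm")              # placeholder DeepSpeech model path
emotion = pipeline("text-classification",
                   model="some-org/emotion-checkpoint")  # placeholder emotion model

def emotion_from_wav(path: str):
    with wave.open(path) as wav:                         # DeepSpeech expects 16 kHz, 16-bit mono
        audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
    transcript = asr.stt(audio)                          # ASR hypothesis; may contain recognition errors
    return transcript, emotion(transcript)[0]            # e.g. {'label': 'anger', 'score': 0.87}

print(emotion_from_wav("utterance.wav"))
```

In such a setup the emotion classifier only ever sees the (possibly erroneous) ASR hypothesis, which is exactly the realistic condition the study evaluates.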
Citations: 3
Augment Machine Intelligence with Multimodal Information
Pub Date: 2020-10-16 DOI: 10.1145/3423325.3424123
Zhou Yu
Humans interact with other humans or the world through information from various channels including vision, audio, language, haptics, etc. To simulate intelligence, machines require similar abilities to process and combine information from different channels to acquire better situation awareness, better communication ability, and better decision-making ability. In this talk, we describe three projects. In the first study, we enable a robot to utilize both vision and audio information to achieve better user understanding [1]. Then we use incremental language generation to improve the robot's communication with a human. In the second study, we utilize multimodal history tracking to optimize policy planning in task-oriented visual dialogs. In the third project, we tackle the well-known trade-off between dialog response relevance and policy effectiveness in visual dialog generation. We propose a new machine learning procedure that alternates between supervised learning and reinforcement learning to jointly optimize language generation and policy planning in visual dialogs [2]. We will also cover some recent ongoing work on image synthesis through dialogs.
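The alternation between supervised and reinforcement learning mentioned in the third project can be pictured with a toy skeleton like the one below. It assumes a small PyTorch policy network, cross-entropy on logged human responses for the supervised phase, and REINFORCE with a task-level reward for the RL phase; it is a schematic illustration, not the procedure from [2].

```python
# Schematic alternation of supervised and reinforcement learning for a
# dialog policy (toy example; not the procedure from the cited work).
import torch
import torch.nn as nn

torch.manual_seed(0)
policy = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))  # 4 candidate "responses"
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def supervised_step(state, gold_action):
    """Imitate logged human responses with cross-entropy."""
    loss = nn.functional.cross_entropy(policy(state), gold_action)
    opt.zero_grad()
    loss.backward()
    opt.step()

def reinforce_step(state, reward_fn):
    """Sample a response and reinforce it with a task-level reward (REINFORCE)."""
    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()
    loss = -(dist.log_prob(action) * reward_fn(action)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

for epoch in range(100):                          # alternate the two objectives
    states = torch.randn(32, 8)                   # stand-in for dialog/image features
    gold = torch.randint(0, 4, (32,))             # stand-in for human-labelled responses
    if epoch % 2 == 0:
        supervised_step(states, gold)
    else:
        reinforce_step(states, lambda a: (a == gold).float())  # toy reward: match the gold action
```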
Citations: 1
FUN-Agent: A 2020 HUMAINE Competition Entrant
Pub Date: 2020-10-16 DOI: 10.1145/3423325.3423736
R. Geraghty, James Hale, S. Sen, Timothy S. Kroecker
Of late, there has been a significant surge of interest in industry and the general populace about the future potential of human-AI collaboration [20]. Academic researchers have been pushing the frontier of new modalities of peer-level and ad-hoc human-agent collaboration [10;22] for even longer. We have been particularly interested in research on agents representing human users in negotiating deals with other human and autonomous agents [12;16;18]. Here we present the design for the conversational aspect of our agent entry into the HUMAINE League of the 2020 Automated Negotiation Agent Competition (ANAC). We discuss how our agent utilizes conversational and negotiation strategies that mimic those used in human negotiations to maximize its utility as a simulated street vendor. We leverage verbal influence tactics, offer pricing, and increased human convenience to entice the buyer, build trust, and discourage exploitation. Additionally, we discuss the results of some in-house testing we conducted.
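For readers unfamiliar with offer pricing in automated negotiation, a standard time-dependent concession curve (in the style often attributed to Faratin et al.) gives the flavor of how a seller's asking price can move toward a reservation price as the deadline nears. The sketch below is illustrative only; the parameters are made up and it is not the FUN-Agent strategy itself.

```python
# Time-dependent concession pricing for a seller agent (illustrative only;
# not the FUN-Agent strategy). The offer moves from an ask price toward a
# reservation price as the negotiation deadline approaches.
def seller_offer(t: float, ask: float = 100.0, reservation: float = 60.0,
                 beta: float = 0.5) -> float:
    """t in [0, 1] is the fraction of negotiation time elapsed.
    beta < 1 concedes slowly (Boulware); beta > 1 concedes quickly (Conceder)."""
    concession = t ** (1.0 / beta)
    return ask - (ask - reservation) * concession

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  offer={seller_offer(t):.2f}")
```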
Citations: 1
Assisted Speech to Enable Second Language
Pub Date: 2020-10-16 DOI: 10.1145/3423325.3423735
Mehmet Altinkaya, A. Smeulders
Speaking a second language (L2) is a desired capability for billions of people. Currently, the only way to achieve it naturally is through lengthy and tedious training, which ends at various stages of fluency. The process is far away from the natural acquisition of a language. In this paper, we propose a system that enables any person with some basic understanding of L2 to speak fluently through "Instant Assistance" provided by digital conversational agents such as Google Assistant, Microsoft Cortana, or Apple Siri, which monitors the speaker. It provides assistance to continue speaking when speech is interrupted because the language is not yet completely mastered. The not yet acquired elements of language can be missing words, unfamiliarity with expressions, the implicit rules of articles, and the habits of sayings. We can employ the hardware and software of the assistants to create an immersive, adaptive learning environment to train the speaker online through a symbiotic interaction for implicit, unnoticeable correction.
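One way to picture the "Instant Assistance" idea is a loop that watches for a prolonged pause mid-utterance and then offers a likely continuation. The sketch below is purely illustrative: the energy-based silence test, the pause threshold, and the use of a generic GPT-2 text-generation pipeline are all assumptions, not the system proposed in the paper.

```python
# Toy sketch of "instant assistance" for an L2 speaker (illustrative assumptions only).
import numpy as np
from transformers import pipeline

complete = pipeline("text-generation", model="gpt2")   # stand-in for a proper assistant model

def is_silence(frame: np.ndarray, threshold: float = 0.01) -> bool:
    """Crude energy-based silence check on a chunk of float32 audio samples."""
    return float(np.sqrt(np.mean(frame ** 2))) < threshold

def assist(transcript_so_far: str, recent_frames: list, max_silent_frames: int = 30):
    """Suggest a continuation only when the speaker appears stuck (all recent frames silent)."""
    if len(recent_frames) >= max_silent_frames and \
       all(is_silence(f) for f in recent_frames[-max_silent_frames:]):
        generated = complete(transcript_so_far, max_new_tokens=3)[0]["generated_text"]
        return generated[len(transcript_so_far):].strip()   # only the suggested continuation
    return None                                              # speaker is still talking; stay quiet
```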
Citations: 1
Motivation and Design of the Conversational Components of DraftAgent for Human-Agent Negotiation
Pub Date: 2020-10-16 DOI: 10.1145/3423325.3423734
Dale Peasley, Michael Naguib, Bohan Xu, S. Sen, Timothy S. Kroecker
In sync with the significant interest in industry and the general populace about the future potential of human-AI collaboration [14], academic researchers have been pushing the frontier of new modalities of peer-level and ad-hoc human-agent collaboration [4,15]. We have been particularly interested in research on agents representing human users in negotiating deals with other human and autonomous agents [6,11,13]. We present the design motivation and key components of the conversational aspect of our agent entry into the Human-Agent League (HAL) (http://web.tuat.ac.jp/~katfuji/ANAC2020/cfp/ham_cfp.pdf) of the 2020 Automated Negotiation Agent Competition (ANAC). We explore how language can be used to promote human-agent collaboration even in the domain of a competitive negotiation. We present small-scale in-lab testing to demonstrate the potential of our approach.
Citations: 2
A Dynamic, Self Supervised, Large Scale AudioVisual Dataset for Stuttered Speech
Pub Date: 2020-10-16 DOI: 10.1145/3423325.3423733
Mehmet Altinkaya, A. Smeulders
Stuttering affects at least 1% of the world population. It is caused by irregular disruptions in speech production. These interruptions occur in various forms and frequencies. Repetition of words or parts of words, prolongations, or blocks in getting the words out are the most common ones. Accurate detection and classification of stuttering would be important in the assessment of severity for speech therapy. Furthermore, real-time detection might create many new possibilities to facilitate reconstruction into fluent speech. Such an interface could help people to utilize voice-based interfaces like Apple Siri and Google Assistant, or to make (video) phone calls more fluent by delayed delivery. In this paper we present the first expandable audio-visual database of stuttered speech. We explore an end-to-end, real-time, multi-modal model for detection and classification of stuttered blocks in unbound speech. We also make use of video signals, since acoustic signals cannot be produced immediately. We use multiple modalities, as acoustic signals together with secondary characteristics exhibited in visual signals will permit an increased accuracy of detection.
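To make the multimodal claim concrete, a minimal late-fusion classifier could encode the audio and video streams separately and concatenate the resulting features before deciding whether a segment contains a stuttered block. The encoders and dimensions below are illustrative assumptions, not the architecture from the paper.

```python
# Minimal late-fusion model for stutter-block classification
# (illustrative assumptions only; not the model described in the paper).
import torch
import torch.nn as nn

class AudioVisualStutterNet(nn.Module):
    def __init__(self, audio_dim=40, video_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.audio_enc = nn.GRU(audio_dim, hidden, batch_first=True)   # e.g. MFCC frame sequence
        self.video_enc = nn.GRU(video_dim, hidden, batch_first=True)   # e.g. face-crop feature sequence
        self.head = nn.Linear(2 * hidden, n_classes)                   # fluent vs. stuttered block

    def forward(self, audio, video):
        _, a = self.audio_enc(audio)                 # final hidden state per modality
        _, v = self.video_enc(video)
        fused = torch.cat([a[-1], v[-1]], dim=-1)    # late fusion by concatenation
        return self.head(fused)

model = AudioVisualStutterNet()
audio = torch.randn(4, 100, 40)     # batch of audio feature sequences (100 frames each)
video = torch.randn(4, 25, 512)     # batch of video feature sequences (25 frames each)
print(model(audio, video).shape)    # torch.Size([4, 2])
```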
Citations: 2
Proceedings of the 1st International Workshop on Multimodal Conversational AI
{"title":"Proceedings of the 1st International Workshop on Multimodal Conversational AI","authors":"","doi":"10.1145/3423325","DOIUrl":"https://doi.org/10.1145/3423325","url":null,"abstract":"","PeriodicalId":142947,"journal":{"name":"Proceedings of the 1st International Workshop on Multimodal Conversational AI","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124322834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0