
Latest publications from the Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)

Fine-grained Multi-lingual Disentangled Autoencoder for Language-agnostic Representation Learning
Zetian Wu, Zhongkai Sun, Zhengyang Zhao, Sixing Lu, Chengyuan Ma, Chenlei Guo
Encoding both language-specific and language-agnostic information into a single high-dimensional space is a common practice of pre-trained Multi-lingual Language Models (pMLM). Such encoding has been shown to perform effectively on natural language tasks requiring semantics of the whole sentence (e.g., translation). However, its effectiveness appears to be limited on tasks requiring partial information of the utterance (e.g., multi-lingual entity retrieval, template retrieval, and semantic alignment). In this work, a novel Fine-grained Multilingual Disentangled Autoencoder (FMDA) is proposed to disentangle fine-grained semantic information from language-specific information in a multi-lingual setting. FMDA is capable of successfully extracting the disentangled template semantic and residual semantic representations. Experiments conducted on the MASSIVE dataset demonstrate that the disentangled encodings can boost each other during training, thus consistently outperforming the original pMLM and the strong language disentanglement baseline on monolingual template retrieval and cross-lingual semantic retrieval tasks across multiple languages.
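To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of the core disentanglement move: projecting a pooled pMLM sentence embedding into separate template-semantic and residual subspaces and reconstructing the input from their concatenation. The module, dimensions, and the plain reconstruction loss are illustrative assumptions; the paper's actual architecture and training objectives are not reproduced here.

```python
# Hypothetical sketch (not the paper's implementation): split one pooled
# multilingual sentence embedding into a template-semantic code and a
# language-specific residual code, then reconstruct the input from both.
import torch
import torch.nn as nn

class DisentangledHead(nn.Module):
    def __init__(self, hidden_size: int = 768, sub_size: int = 256):
        super().__init__()
        self.template_proj = nn.Linear(hidden_size, sub_size)  # template semantics
        self.residual_proj = nn.Linear(hidden_size, sub_size)  # residual semantics
        self.decoder = nn.Linear(2 * sub_size, hidden_size)    # reconstruction

    def forward(self, pooled: torch.Tensor):
        z_template = self.template_proj(pooled)
        z_residual = self.residual_proj(pooled)
        recon = self.decoder(torch.cat([z_template, z_residual], dim=-1))
        return z_template, z_residual, recon

# Toy usage: the reconstruction loss keeps both codes informative; a real
# system would add contrastive or adversarial terms to enforce the split.
head = DisentangledHead()
pooled = torch.randn(4, 768)  # stand-in for pMLM sentence embeddings
z_t, z_r, recon = head(pooled)
loss = nn.functional.mse_loss(recon, pooled)
loss.backward()
```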
Citations: 0
C5L7: A Zero-Shot Algorithm for Intent and Slot Detection in Multilingual Task Oriented Languages
Jiun-hao Jhan, Qingxiaoyang Zhu, Nehal Bengre, T. Kanungo
Voice assistants are becoming central to our lives. The convenience of using voice assistants to do simple tasks has created an industry for voice-enabled devices like TVs, thermostats, air conditioners, etc. It has also improved the quality of life of elders by making the world more accessible. Voice assistants engage in task-oriented dialogues using machine-learned language understanding models. However, training deep-learned models takes a lot of training data, which is time-consuming and expensive. Furthermore, it is even more problematic if we want the voice assistant to understand hundreds of languages. In this paper, we present a zero-shot deep learning algorithm that uses only the English part of the MASSIVE dataset and achieves a high level of accuracy across 51 languages. The algorithm uses a delexicalized translation model to generate multilingual data for data augmentation. The training data is further weighted to improve the accuracy of the worst-performing languages. We report on our experiments with code-switching, word order, multilingual ensemble methods, and other techniques and their impact on overall accuracy.
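The delexicalized translation step can be illustrated with a small, self-contained sketch: slot values are masked with placeholders before translation so they survive the MT system intact, then restored afterwards. The placeholder format and the identity `translate` stub are assumptions for illustration only.

```python
# Hypothetical sketch of delexicalized translation for data augmentation.
def delexicalize(utterance: str, slots: dict) -> tuple[str, dict]:
    """Replace slot values with numbered placeholders, e.g. 'nine am' -> '<SLOT_0>'."""
    mapping = {}
    for i, (slot_name, value) in enumerate(slots.items()):
        placeholder = f"<SLOT_{i}>"
        utterance = utterance.replace(value, placeholder)
        mapping[placeholder] = (slot_name, value)
    return utterance, mapping

def relexicalize(translated: str, mapping: dict) -> str:
    """Restore the original slot values in the translated template."""
    for placeholder, (_, value) in mapping.items():
        translated = translated.replace(placeholder, value)
    return translated

# Toy usage with an identity "translation" so the example runs as-is;
# a real pipeline would call an MT model here.
translate = lambda text, target_lang: text  # assumption: MT stub
template, mapping = delexicalize("wake me up at nine am", {"time": "nine am"})
print(relexicalize(translate(template, "de"), mapping))  # wake me up at nine am
```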
Citations: 0
Machine Translation for Multilingual Intent Detection and Slots Filling
Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann, Walter Daelemans
We expect to interact with home assistants irrespective of our language. However, scaling the Natural Language Understanding pipeline to multiple languages while keeping the same level of accuracy remains a challenge. In this work, we leverage the inherent multilingual aspect of translation models for the task of multilingual intent classification and slot filling. Our experiments reveal that they work equally well with general-purpose multilingual text-to-text models. Furthermore, their accuracy can be further improved by artificially increasing the size of the training set. Unfortunately, increasing the training set also increases the overlap with the test set, leading to overestimating their true capabilities. As a result, we propose two new evaluation methods capable of accounting for an overlap between the training and test set.
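A simple way to see the overlap problem the authors raise is to measure how many test utterances also occur verbatim in the augmented training set; a minimal sketch of such a check follows. The normalization and exact-match criterion are assumptions, not the paper's proposed evaluation methods.

```python
# Hypothetical sketch: fraction of test utterances seen verbatim in training.
def overlap_ratio(train_utts: list[str], test_utts: list[str]) -> float:
    train_set = {u.strip().lower() for u in train_utts}
    hits = sum(1 for u in test_utts if u.strip().lower() in train_set)
    return hits / len(test_utts)

train = ["wake me at nine", "play some jazz", "wake me at nine"]
test = ["wake me at nine", "turn off the lights"]
print(f"{overlap_ratio(train, test):.0%} of test utterances seen in training")  # 50%
```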
Citations: 3
Byte-Level Massively Multilingual Semantic Parsing
M. Nicosia, Francesco Piccinno
Token-free approaches have been successfully applied to a series of word- and span-level tasks. In this work, we evaluate a byte-level sequence-to-sequence model (ByT5) on the 51 languages in the MASSIVE multilingual semantic parsing dataset. We examine multiple experimental settings: (i) zero-shot, (ii) full gold data and (iii) zero-shot with synthetic data. By leveraging a state-of-the-art label projection method for machine translated examples, we are able to reduce the gap in exact match to only 5 points with respect to a model trained on gold data from all the languages. We additionally provide insights on the cross-lingual transfer of ByT5 and show how the model compares with respect to mT5 across all parameter sizes.
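ByT5 checkpoints are available through the Hugging Face transformers library; a minimal usage sketch is below. The checkpoint name and input are illustrative, and this is the off-the-shelf model, not the paper's fine-tuned semantic parser.

```python
# Minimal sketch: run an off-the-shelf byte-level seq2seq model (ByT5).
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

# ByT5 operates on raw UTF-8 bytes, so text in any of MASSIVE's 51
# languages can be encoded without a learned subword vocabulary.
inputs = tokenizer("ordina una pizza per stasera", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```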
Citations: 1
Zero-Shot Cross-Lingual Sequence Tagging as Seq2Seq Generation for Joint Intent Classification and Slot Filling
Fei Wang, Kuan-Hao Huang, Anoop Kumar, A. Galstyan, Greg Ver Steeg, Kai-Wei Chang
The joint intent classification and slot filling task seeks to detect the intent of an utterance and extract its semantic concepts. In the zero-shot cross-lingual setting, a model is trained on a source language and then transferred to other target languages through multi-lingual representations without additional training data. While prior studies show that pre-trained multilingual sequence-to-sequence (Seq2Seq) models can facilitate zero-shot transfer, there is little understanding of how to design the output template for the joint prediction tasks. In this paper, we examine three aspects of the output template: (1) label mapping, (2) task dependency, and (3) word order. Experiments on the MASSIVE dataset consisting of 51 languages show that our output template significantly improves the performance of pre-trained cross-lingual language models.
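The three template aspects can be made concrete with a toy serializer that toggles natural-word label mapping and intent-first ordering while preserving input word order. The format below is a hypothetical illustration, not the template proposed in the paper.

```python
# Hypothetical illustration of the three output-template aspects:
# (1) label mapping, (2) task dependency (intent first or last),
# (3) word order (slots emitted in input order).
LABEL_WORDS = {"B-time": "time", "O": "none"}  # assumed label mapping

def build_template(intent, tokens, tags, natural_labels=True, intent_first=True):
    labels = [LABEL_WORDS.get(t, t) if natural_labels else t for t in tags]
    slot_str = " ".join(f"{tok} [{lab}]" for tok, lab in zip(tokens, labels))
    intent_str = f"intent is {intent}"
    # Emitting the intent first lets slot decoding condition on it.
    return f"{intent_str} ; {slot_str}" if intent_first else f"{slot_str} ; {intent_str}"

print(build_template("alarm_set", ["at", "nine"], ["O", "B-time"]))
# intent is alarm_set ; at [none] nine [time]
```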
Citations: 0
Play música alegre: A Large-Scale Empirical Analysis of Cross-Lingual Phenomena in Voice Assistant Interactions
Donato Crisostomi, Davide Bernardi, Sarah Campbell
Cross-lingual phenomena are quite common in informal contexts like social media, where users are likely to mix their native language with English or other languages. However, few studies have focused so far on analyzing cross-lingual interactions in voice-assistant data, which present peculiar features in terms of sentence length, named entities, and use of spoken language. Also, little attention has been paid to European countries, where English is frequently used as a second language. In this paper, we present a large-scale empirical analysis of cross-lingual phenomena (code-mixing, linguistic borrowing, foreign named entities) in the interactions with a large-scale voice assistant in European countries. To do this, we first introduce a general, highly-scalable technique to generate synthetic mixed training data annotated with token-level language labels, and we train two neural network models to predict them. We evaluate the models both on the synthetic dataset and on a real dataset of code-switched utterances, showing that the best performance is obtained by a character-convolution-based model. The results of the analysis highlight different behaviors between countries, with Italy showing the highest ratio of cross-lingual utterances and Spain a marked preference for keeping Spanish words. Our research, paired with the increase of cross-lingual phenomena over time, motivates further research in developing multilingual Natural Language Understanding (NLU) models, which can naturally deal with cross-lingual interactions.
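The synthetic-data generation step can be sketched as random lexical substitution with per-token language labels. The toy English-Spanish lexicon and the mixing probability below are assumptions for illustration only.

```python
# Hypothetical sketch: synthesize code-mixed utterances with per-token
# language labels, in the spirit of the scalable technique described above.
import random

EN_TO_ES = {"play": "pon", "happy": "alegre", "music": "música"}  # toy lexicon

def synth_code_mix(tokens: list[str], mix_prob: float = 0.5, seed: int = 0):
    rng = random.Random(seed)
    mixed, labels = [], []
    for tok in tokens:
        if tok in EN_TO_ES and rng.random() < mix_prob:
            mixed.append(EN_TO_ES[tok])  # swap in the Spanish word
            labels.append("es")
        else:
            mixed.append(tok)
            labels.append("en")
    return mixed, labels

print(synth_code_mix(["play", "happy", "music"]))
# (['play', 'happy', 'música'], ['en', 'en', 'es']) with this seed
```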
Citations: 0