
2019 22nd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA): Latest Publications

The Architecture of Speech-to-Speech Translator for Mobile Conversation
Agung Santosa, Andi Djalal Latief, Hammam Riza, Asril Jarin, Lyla Ruslana Aini, Gunarso, Gita Citra Puspita, M. T. Uliniansyah, Elvira Nurfadhilah, Harnum A. Prafitia, Made Gunawan
Drawing on the natural language processing competencies and engineering results that BPPT has built up since 1987, BPPT has developed an English-Bahasa Indonesia speech-to-speech translation (S2ST) system. In this paper, we propose an architecture for a speech-to-speech translation system for Android-based mobile conversation that uses a separate mobile device for each language. The architecture applies three key technologies: WebSocket, REST, and JSON. The system uses a two-way communication protocol between the two users and a simple voice activation detector that detects the boundaries of a user's utterance.
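As an illustration of how the WebSocket/REST/JSON combination described above could fit together, here is a minimal Python sketch of one leg of the exchange. The endpoint URLs, message fields, and the relay_utterance helper are assumptions made for the example, not details taken from the paper.

```python
# A minimal sketch, assuming hypothetical endpoints, of how a REST translation
# call and a WebSocket relay to the partner device could be combined. This is
# an illustration of the WebSocket/REST/JSON combination, not the paper's code.
import asyncio
import json

import requests      # REST call to a translation service
import websockets    # two-way channel to the partner's device

TRANSLATE_URL = "http://example.local/api/translate"  # hypothetical REST endpoint
RELAY_WS = "ws://example.local/relay"                  # hypothetical WebSocket relay

async def relay_utterance(text: str, src: str = "en", dst: str = "id") -> None:
    # 1) REST: request a translation of the recognized utterance.
    resp = requests.post(TRANSLATE_URL, json={"text": text, "src": src, "dst": dst})
    resp.raise_for_status()
    translated = resp.json().get("translation", "")

    # 2) WebSocket: push a JSON message to the partner's device for synthesis.
    async with websockets.connect(RELAY_WS) as ws:
        await ws.send(json.dumps({"type": "utterance", "lang": dst, "text": translated}))
        ack = await ws.recv()  # simple acknowledgement from the other side
        print("relay ack:", ack)

if __name__ == "__main__":
    asyncio.run(relay_utterance("Good morning, where is the station?"))
```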
{"title":"The Architecture of Speech-to-Speech Translator for Mobile Conversation","authors":"Agung Santosa, Andi Djalal Latief, Hammam Riza, Asril Jarin, Lyla Ruslana Aini, Gunarso, Gita Citra Puspita, M. T. Uliniansyah, Elvira Nurfadhilah, Harnum A. Prafitia, Made Gunawan","doi":"10.1109/O-COCOSDA46868.2019.9041196","DOIUrl":"https://doi.org/10.1109/O-COCOSDA46868.2019.9041196","url":null,"abstract":"With competencies and the results of the engineering of natural language processing technology owned by BPPT since 1987, BPPT develops an English-Bahasa Indonesia speech-to-speech translation system (S2ST). In this paper, we propose an architecture of speech-to-speech translation system for Android-based mobile conversation using separate mobile devices for each language. This architecture applies three leading technologies, namely: WebSocket, REST, and JSON. The system utilizes a two-way communication protocol between two users and a simple voice activation detector that can detect a boundary of user's utterance.","PeriodicalId":263209,"journal":{"name":"2019 22nd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116075403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
The Study of Prosody-Pragmatics Interface with Focus Functioning as Pragmatic Markers: The Case of Question and Statement
Siyi Cao, Yizhong Xu, Xiaoli Ji
Building on [22]'s view that Pragmatic Markers (PMs) are realized mainly through prosody by both native and non-native speakers, this paper investigates whether, when focus functions as a pragmatic marker, pragmatic factors on the non-native speakers' side restrict the prosodic realization of PMs and thereby cause misunderstanding in intercultural communication, taking declarative questions and statements as the test case. Pitch contours of sentences produced by 17 Chinese EFL (English as a foreign language) learners (non-native speakers) were compared with those of six native speakers, using four sentences from AESOP. The results show that both native and non-native speakers do realize pragmatic markers (focused words) through prosodic cues (pitch range), but differ in how they realize them, which leads to pragmatic misunderstanding. These findings support [22]'s view and demonstrate that pragmatic factors such as transfer, L2 teaching, and the proficiency of non-native speakers constrain the prosodic means of realizing pragmatic markers, pointing to conventionality in cross-cultural conversation.
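For readers unfamiliar with the prosodic measure involved, the sketch below shows one simple way to compute the pitch range of a focused word from frame-level F0 values. The numbers are invented for illustration and are not data from the study.

```python
# A toy illustration of the prosodic cue the study relies on: pitch range over
# a focused word, computed as max minus min F0 across voiced frames. The frame
# values below are invented for the example, not data from the paper.
import numpy as np

def pitch_range_hz(f0_frames):
    """Pitch range (Hz) over voiced frames; unvoiced frames are marked as 0."""
    voiced = np.asarray([f for f in f0_frames if f > 0], dtype=float)
    if voiced.size == 0:
        return 0.0
    return float(voiced.max() - voiced.min())

# Hypothetical F0 tracks for the same word produced with and without focus.
focused = [0, 210, 235, 262, 250, 228, 0]
unfocused = [0, 198, 205, 211, 207, 201, 0]
print(pitch_range_hz(focused), pitch_range_hz(unfocused))  # 52.0 vs 13.0
```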
{"title":"The Study of Prosody-Pragmatics Interface with Focus Functioning as Pragmatic Markers: The Case of Question and Statement","authors":"Siyi Cao, Yizhong Xu, Xiaoli Ji","doi":"10.1109/O-COCOSDA46868.2019.9041157","DOIUrl":"https://doi.org/10.1109/O-COCOSDA46868.2019.9041157","url":null,"abstract":"This paper investigated that based on [22] ‘s perspective that Pragmatic Markers (PMs) are realized mainly through prosody between native speakers and non-native speakers, when focus functions as pragmatic markers, whether pragmatic factors from non-native speakers restrict the realization of Pragmatic Markers through prosody leading to misunderstanding in intercultural communication, in the case of declarative questions and statements. Pitch contours of 17 Chinese EFL (English as a foreign language) learners (non-native speakers)’ sentences were compared with that of six native speakers using four sentences from AESOP. The results demonstrated that native speakers and non-native speakers indeed realized pragmatic markers (focused words) through prosodic cues (pitch range), but differed in the way of realization for pragmatic markers, leading to pragmatic misunderstanding. This paper proves [22] ‘s opinion and demonstrates that pragmatic elements from transfer, L2 teaching, proficiency of non-native speakers constraint prosodic ways for realizing pragmatic markers, which indicates conventionality in cross-culture conversation.","PeriodicalId":263209,"journal":{"name":"2019 22nd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132616409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison between read and spontaneous speech assessment of L2 Korean
S. Yang, Minhwa Chung
This paper describes two experiments exploring the relationship between linguistic factors and perceived proficiency in read and spontaneous speech. In Experiment 1, 5,000 read-speech utterances produced by 50 non-native speakers of Korean, and in Experiment 2, 6,000 spontaneous utterances, were scored for proficiency by native human raters and analyzed with respect to factors known to relate to perceived proficiency. The results show that the factors investigated in this study can be used to predict proficiency ratings, with fluency and pitch and accent accuracy being strong predictors for both read and spontaneous speech. We also observe that while proficiency ratings of read speech are mainly related to segmental accuracy, those of spontaneous speech appear to be more closely related to pitch and accent accuracy. Moreover, proficiency in read speech does not always equate to proficiency in spontaneous speech, and vice versa, with a per-speaker Pearson correlation of 0.535.
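The per-speaker relationship reported at the end of the abstract is an ordinary Pearson correlation between each speaker's read-speech and spontaneous-speech ratings. A minimal sketch of that computation is given below, with fabricated placeholder scores rather than the study's data.

```python
# A minimal sketch of the per-speaker comparison mentioned above: Pearson
# correlation between each speaker's mean read-speech rating and mean
# spontaneous-speech rating. The scores are fabricated placeholders.
from scipy.stats import pearsonr

# speaker_id -> (mean read-speech proficiency, mean spontaneous proficiency)
scores = {
    "spk01": (4.2, 3.6),
    "spk02": (3.1, 3.4),
    "spk03": (4.8, 4.1),
    "spk04": (2.9, 2.2),
    "spk05": (3.7, 3.9),
}

read_scores = [r for r, _ in scores.values()]
spon_scores = [s for _, s in scores.values()]
r, p = pearsonr(read_scores, spon_scores)
print(f"per-speaker Pearson r = {r:.3f} (p = {p:.3f})")
```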
{"title":"Comparison between read and spontaneous speech assessment of L2 Korean","authors":"S. Yang, Minhwa Chung","doi":"10.1109/O-COCOSDA46868.2019.9060846","DOIUrl":"https://doi.org/10.1109/O-COCOSDA46868.2019.9060846","url":null,"abstract":"This paper describes two experiments aimed at exploring the relationship between linguistic aspects and perceived proficiency in read and spontaneous speech. 5,000 utterances of read speech by 50 non-native speakers of Korean in Experiment 1, and of 6,000 spontaneous speech utterances in Experiment 2 were scored for proficiency by native human raters and were analyzed by factors known to be related to perceived proficiency. The results show that the factors investigated in this study can be employed to predict proficiency ratings, and the predictive power of fluency and pitch and accent accuracy is strong for both read and spontaneous speech. We also observe that while proficiency ratings of read speech are mainly related to segmental accuracy, those of spontaneous speech appear to be more related to pitch and accent accuracy. Moreover, proficiency in read speech does not always equate to the proficiency in spontaneous speech, and vice versa, with Pearson’s per-speaker correlation score of 0.535.","PeriodicalId":263209,"journal":{"name":"2019 22nd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122246679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Large Collection of Sentences Read Aloud by Vietnamese Learners of Japanese and Native Speaker's Reverse Shadowings
Shintaro Ando, Z. Lin, Tasavat Trisitichoke, Y. Inoue, Fuki Yoshizawa, D. Saito, N. Minematsu
The main objective of language learning is to acquire good communication skills in the target language. From this viewpoint, the primary goal of pronunciation training is to be able to speak with intelligible-enough or comprehensible-enough pronunciation, not a native-sounding one. However, achieving such pronunciation is still not easy for many learners, mainly because they lack opportunities to use the language they are learning and to receive feedback on intelligibility or comprehensibility from native listeners. To address this problem, the authors previously proposed a novel method of native speakers' reverse shadowing and showed that the degree of inarticulation observed in native speakers' shadowings of learners' utterances can be used to estimate the comprehensibility of the learners' speech. One major limitation of our previous research, however, was its relatively small scale: only six learners were involved. For this reason, in this study we collected a much larger set of Japanese utterances read aloud by 60 Vietnamese learners, together with Japanese native speakers' shadowings of those utterances. An analysis of the subjective ratings given by the native speakers suggests that the modifications made since our previous experiment help make the framework of native speakers' reverse shadowing more pedagogically effective. Further, a preliminary analysis of the recorded shadowings shows good correlations with listeners' perceived shadowability.
{"title":"A Large Collection of Sentences Read Aloud by Vietnamese Learners of Japanese and Native Speaker's Reverse Shadowings","authors":"Shintaro Ando, Z. Lin, Tasavat Trisitichoke, Y. Inoue, Fuki Yoshizawa, D. Saito, N. Minematsu","doi":"10.1109/O-COCOSDA46868.2019.9041215","DOIUrl":"https://doi.org/10.1109/O-COCOSDA46868.2019.9041215","url":null,"abstract":"The main objective of language learning is to acquire good communication skills in the target language. From this viewpoint, the primary goal of pronunciation training is to become able to speak in an intelligible-enough or comprehensible-enough pronunciation, not a native-sounding one. However, achieving such pronunciation is still not easy for many learners mainly because of their lack of opportunity to use the language they learn and to receive some feedbacks on intelligibility or comprehensibility from native listeners. In order to solve this problem, the authors previously proposed a novel method of native speakers' reverse shadowing and showed that the degree of inarticulation observed in native speakers' shadowings of learners' utterances can be used to estimate the comprehensibility of learners' speech. One major problem in our previous research however, was that the experiment was done on a relatively small scale; the number of learners was only six. For this reason, in this study, we carried out a larger collection of Japanese utterances read aloud by 60 Vietnamese learners and Japanese native speakers' shadowings of those utterances. An analysis of the subjective ratings done by the native speakers implies that some modifications we made from our previous experiment contribute to making the framework of native speakers' reverse shadowing more pedagogically effective. Further, a preliminary analysis of the recorded shadowings shows good correlations to listeners' perceived shadowability.","PeriodicalId":263209,"journal":{"name":"2019 22nd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA)","volume":"33 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132596172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Challenges Posed by Voice Interface to Child-Agent Collaborative Storytelling
Ethel Ong, Junlyn Bryan Alburo, Christine Rachel De Jesus, Luisa Katherine Gilig, Dionne Tiffany Ong
Child-agent collaborative storytelling can be facilitated through text and voice interfaces. Voice interfaces are more intuitive and closely resemble the way people usually relate to one another. This may be attributed to the colloquial character of everyday conversation, which does away with the rigid linguistic structures typically present in text interfaces, such as the need for correct grammar and spelling. However, the capabilities of the voice-based interfaces currently available in virtual assistants can lead to communication failure: users become frustrated and confused when the agent does not provide the needed support, often because the agent has misinterpreted the user's input. In such situations, text-based interfaces from messaging applications may be used as an alternative communication channel. In this paper, we provide a comparative analysis of the performance of our collaborative storytelling agent in processing user input by analyzing conversation logs from a voice-based interface built with Google Assistant and a text-based interface built with Google Firebase. To do this, we give a brief overview of the different dialogue strategies employed by our agent and how these are manifested through the interfaces. We also identify the obstacles that incorrect input processing poses to the collaborative tasks and offer suggestions on how these challenges can be addressed.
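One simple way such conversation logs could be compared is to measure how often each interface produced a turn the agent could not interpret, as in the sketch below. The log format, file names, and the "fallback" intent label are assumptions made for this example and are not taken from the paper.

```python
# An illustrative sketch (not the authors' code) of one way the two interfaces'
# conversation logs could be compared: how often each channel produced a turn
# the agent could not interpret. The log format, file names, and the
# "fallback" intent label are assumptions made for this example.
import json
from collections import Counter

def fallback_rate(log_path: str) -> float:
    """Fraction of turns tagged with the fallback intent in one log file."""
    with open(log_path, encoding="utf-8") as f:
        turns = json.load(f)  # expected: a list of {"intent": ...} turn records
    counts = Counter(t.get("intent", "unknown") for t in turns)
    total = sum(counts.values())
    return counts["fallback"] / total if total else 0.0

# Hypothetical log files, one per interface.
for channel, path in [("voice", "assistant_log.json"), ("text", "firebase_log.json")]:
    print(channel, f"fallback rate = {fallback_rate(path):.2%}")
```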
{"title":"Challenges Posed by Voice Interface to Child- Agent Collaborative Storytelling","authors":"Ethel Ong, Junlyn Bryan Alburo, Christine Rachel De Jesus, Luisa Katherine Gilig, Dionne Tiffany Ong","doi":"10.1109/O-COCOSDA46868.2019.9041233","DOIUrl":"https://doi.org/10.1109/O-COCOSDA46868.2019.9041233","url":null,"abstract":"Child-agent collaborative storytelling can be facilitated through text and voice interfaces. Voice interfaces are more intuitive and closely resemble the way people usually relate to one another. This may be attributed to the colloquial characteristics of everyday conversations that do away with rigid linguistic structures typically present in text interfaces, such as observing the use of correct grammar and spelling. However, the capabilities of voice-based interfaces currently available in virtual assistants can lead to failure in communication due to user frustration and confusion when the agent is not providing the needed support, possibly caused by the latter's misinterpretation of the user's input. In such situations, text-based interfaces from messaging applications may be used as an alternative communication channel. In this paper, we provide a comparative analysis of the performance of our collaborative storytelling agent in processing user input by analyzing conversation logs from voice-based interface using Google Assistant, and text-based interface using Google Firebase. To do this, we give a brief overview of the different dialogue strategies employed by our agent, and how these are manifested through the interfaces. We also identify the obstacles posed by incorrect input processing to the collaborative tasks, and offer suggestions on how these challenges can be addressed.","PeriodicalId":263209,"journal":{"name":"2019 22nd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127344906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Fast and Accurate Capitalization and Punctuation for Automatic Speech Recognition Using Transformer and Chunk Merging
B. Nguyen, V. H. Nguyen, Hien Nguyen, Pham Ngoc Phuong, The-Loc Nguyen, Quoc Truong Do, Luong Chi Mai
In recent years, studies on automatic speech recognition (ASR) have shown outstanding results that reach human parity on short speech segments. However, standardizing ASR output for long-speech transcription, such as restoring capitalization and punctuation, remains difficult. These problems prevent readers from understanding the ASR output semantically and also cause difficulties for natural language processing models such as NER, POS tagging, and semantic parsing. In this paper, we propose a method to restore punctuation and capitalization in long-speech ASR transcriptions. The method is based on Transformer models and chunk merging, which allows us to (1) build a single model that performs punctuation and capitalization in one pass, and (2) perform decoding in parallel while improving prediction accuracy. Experiments on the British National Corpus show that the proposed approach outperforms existing methods in both accuracy and decoding speed.
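To make the chunk-merging idea concrete, here is a rough Python sketch of splitting a long transcript into overlapping chunks and stitching the per-chunk outputs back together. The chunk and overlap sizes are illustrative, and the punctuate callback stands in for the Transformer model; this is not the authors' implementation.

```python
# A rough sketch of the chunk-splitting / chunk-merging idea: the transcript is
# cut into overlapping chunks so they can be decoded in parallel, and the
# overlaps are then merged by keeping the half of each overlap farther from a
# chunk boundary. Sizes are illustrative; `punctuate` stands in for the
# Transformer model and is not the authors' implementation.
from typing import Callable, List

def split_chunks(words: List[str], size: int = 64, overlap: int = 16) -> List[List[str]]:
    step = size - overlap
    return [words[i:i + size] for i in range(0, max(len(words) - overlap, 1), step)]

def merge_chunks(chunks: List[List[str]], overlap: int = 16) -> List[str]:
    merged: List[str] = []
    for i, chunk in enumerate(chunks):
        start = overlap // 2 if i > 0 else 0                      # drop half of the left overlap
        end = len(chunk) - overlap // 2 if i < len(chunks) - 1 else len(chunk)
        merged.extend(chunk[start:end])
    return merged

def restore(words: List[str], punctuate: Callable[[List[str]], List[str]]) -> str:
    chunks = split_chunks(words)
    decoded = [punctuate(c) for c in chunks]  # each chunk can be decoded in parallel
    return " ".join(merge_chunks(decoded))
```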
{"title":"Fast and Accurate Capitalization and Punctuation for Automatic Speech Recognition Using Transformer and Chunk Merging","authors":"B. Nguyen, V. H. Nguyen, Hien Nguyen, Pham Ngoc Phuong, The-Loc Nguyen, Quoc Truong Do, Luong Chi Mai","doi":"10.1109/O-COCOSDA46868.2019.9041202","DOIUrl":"https://doi.org/10.1109/O-COCOSDA46868.2019.9041202","url":null,"abstract":"In recent years, studies on automatic speech recognition (ASR) have shown outstanding results that reach human parity on short speech segments. However, there are still difficulties in standardizing the output of ASR such as capitalization and punctuation restoration for long-speech transcription. The problems obstruct readers to understand the ASR output semantically and also cause difficulties for natural language processing models such as NER, POS and semantic parsing. In this paper, we propose a method to restore the punctuation and capitalization for long-speech ASR transcription. The method is based on Transformer models and chunk merging that allows us to (1), build a single model that performs punctuation and capitalization in one go, and (2), perform decoding in parallel while improving the prediction accuracy. Experiments on British National Corpus showed that the proposed approach outperforms existing methods in both accuracy and decoding speed.","PeriodicalId":263209,"journal":{"name":"2019 22nd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA)","volume":"216 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128874968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38