
Latest publications from the 2008 IEEE Spoken Language Technology Workshop

An extractive-summarization baseline for the automatic detection of noteworthy utterances in multi-party human-human dialog
Pub Date : 2008-12-01 DOI: 10.1109/SLT.2008.4777869
S. Banerjee, Alexander I. Rudnicky
Our goal is to reduce meeting participants' note-taking effort by automatically identifying utterances whose contents meeting participants are likely to include in their notes. Though note-taking is different from meeting summarization, these two problems are related. In this paper we apply techniques developed in extractive meeting summarization research to the problem of identifying noteworthy utterances. We show that these algorithms achieve an f-measure of 0.14 over a 5-meeting sequence of related meetings. The precision - 0.15 - is triple that of the trivial baseline of simply labeling every utterance as noteworthy. We also introduce the concept of "show-worthy" utterances - utterances that contain information that could conceivably result in a note. We show that such utterances can be recognized with an 81% accuracy (compared to 53% accuracy of a majority classifier). Further, if non-show-worthy utterances are filtered out, the precision of noteworthiness detection improves by 33% relative.
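The reported numbers can be sanity-checked with the standard precision/recall/F-measure definitions. A minimal sketch; the counts below are illustrative only (not from the paper) and show why the label-everything baseline gets perfect recall but very low precision:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F-measure from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# The trivial baseline labels every utterance noteworthy, so its
# precision equals the fraction of truly noteworthy utterances
# (5 of 100 in this made-up example) and its recall is 1.0.
p, r, f = precision_recall_f1(tp=5, fp=95, fn=0)
print(round(p, 2), round(r, 2), round(f, 3))  # 0.05 1.0 0.095
```

A system with precision 0.15 would thus be triple this baseline's precision when about 5% of utterances are noteworthy.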
Citations: 16
Better statistical estimation can benefit all phrases in phrase-based statistical machine translation
Pub Date : 2008-12-01 DOI: 10.1109/SLT.2008.4777884
K. Sima'an, M. Mylonakis
The heuristic estimates of conditional phrase translation probabilities are based on frequency counts in a word-aligned parallel corpus. Earlier attempts at more principled estimation using Expectation-Maximization (EM) underperform this heuristic. This paper shows that a recently introduced novel estimator based on smoothing might provide a good alternative. When all phrase pairs are estimated (no length cut-off), this estimator slightly outperforms the heuristic estimator.
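The heuristic in question is the relative-frequency estimate p(e|f) = count(f, e) / count(f) over extracted phrase pairs. A minimal sketch of that estimate, with an optional additive-smoothing term as a simple stand-in for the (more sophisticated) smoothed estimator the paper evaluates; the example phrase pairs are invented:

```python
from collections import Counter

def phrase_trans_probs(pairs, alpha=0.0):
    """Estimate p(e|f) for extracted (source, target) phrase pairs.

    alpha=0 gives the relative-frequency heuristic; alpha>0 adds
    simple additive smoothing over the observed target vocabulary
    (illustrative only -- not the paper's actual smoothing estimator).
    """
    joint = Counter(pairs)
    src = Counter(f for f, _ in pairs)
    vocab = len({e for _, e in pairs})
    return {(f, e): (joint[f, e] + alpha) / (src[f] + alpha * vocab)
            for f, e in joint}

pairs = [("das Haus", "the house")] * 3 + [("das Haus", "the home")]
probs = phrase_trans_probs(pairs)
print(probs[("das Haus", "the house")])  # 0.75
```

With smoothing, probability mass shifts away from the most frequent translation, which matters most for rare phrase pairs whose relative-frequency estimates are unreliable.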
Citations: 4
Experiments in speech driven question answering
Pub Date : 2008-12-01 DOI: 10.1109/SLT.2008.4777846
César González Ferreras, Valentín Cardeñoso-Payo, E. Arnal
In this paper we present a system that allows users to obtain the answer to a given spoken question expressed in natural language. A large vocabulary continuous speech recognizer is used to transcribe the spoken question into text. Then, a question answering engine is used to obtain the answer to the question. Some improvements over the baseline system were proposed in order to adapt the output of the speech recognizer to the question answering engine: capitalized output from the speech recognizer and a language model for questions. System performance was evaluated using a standard question answering test suite from CLEF. Results showed that the proposed approach outperforms the baseline system both in WER and in overall system accuracy.
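One of the two adaptations, restoring capitalization (truecasing) in the recognizer's lowercased output before it reaches the question answering engine, can be sketched with a simple most-frequent-surface-form model. The training text and function names here are illustrative, not from the paper:

```python
from collections import Counter, defaultdict

def train_truecaser(corpus_tokens):
    """Learn each word's most frequent cased surface form
    from properly cased text."""
    forms = defaultdict(Counter)
    for tok in corpus_tokens:
        forms[tok.lower()][tok] += 1
    return {w: c.most_common(1)[0][0] for w, c in forms.items()}

def truecase(asr_tokens, model):
    """Restore case in lowercased ASR output; unseen words pass through."""
    return [model.get(t.lower(), t) for t in asr_tokens]

model = train_truecaser("Who wrote Hamlet ? Shakespeare wrote Hamlet .".split())
print(" ".join(truecase("who wrote hamlet ?".split(), model)))
# Who wrote Hamlet ?
```

Restoring case helps the QA engine's named-entity handling, which typically relies on capitalization cues absent from raw ASR output.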
Citations: 5
Automatic labeling of contrastive word pairs from spontaneous spoken English
Pub Date : 2008-12-01 DOI: 10.1109/SLT.2008.4777850
Leonardo Badino, R. Clark
This paper addresses the problem of automatically labeling contrast in spontaneous speech, where contrast is meant as a relation that ties two words that explicitly contrast with each other. Detection of contrast is certainly relevant to the analysis of discourse and information structure and, because of the prosodic correlates of contrast, could also play an important role in speech applications, such as text-to-speech synthesis, that need accurate, discourse-context-related modeling of prosody. With this prospect, we investigate the feasibility of automatic contrast labeling by training and evaluating, on the Switchboard corpus, a novel contrast tagger based on support vector machines (SVMs) that combines lexical features, syntactic dependencies, and WordNet semantic relations.
Citations: 8
Experiences designing a voice interface for rural India
Pub Date : 2008-12-01 DOI: 10.1109/SLT.2008.4777830
Neil Patel, Sheetal K. Agarwal, Nitendra Rajput, A. A. Nanavati, Paresh Dave, Tapan S. Parikh
In this paper we describe our experiences designing a voice interface in rural India. We outline our design process from initial contextual inquiry to a formal user evaluation, and use this discussion to motivate research guidelines for others designing voice interfaces in developing regions. Our three guidelines are to build around existing information systems, to iterate on the design through user testing, and to explore design alternatives through empirical analysis. We also share some practical lessons learned in designing, implementing, and evaluating information systems for developing regions in general.
Citations: 21
Word-lattice based spoken-document indexing with standard text indexers
Pub Date : 2008-12-01 DOI: 10.1109/SLT.2008.4777898
F. Seide, K. Thambiratnam, Roger Peng Yu
Indexing the spoken content of audio recordings requires automatic speech recognition, which is as of today not reliable. Unlike indexing text, we cannot reliably know from a speech recognizer whether a word is present at a given point in the audio; we can only obtain a probability for it. Correct use of these probabilities significantly improves spoken-document search accuracy. In this paper, we will first describe how to improve accuracy for "web-search style" (AND/phrase) queries into audio, by utilizing speech recognition alternates and word posterior probabilities based on word lattices. Then, we will present an end-to-end approach to doing so using standard text indexers, which by design cannot handle probabilities and unaligned alternates. We present a sequence of approximations that transform the numeric lattice-matching problem into a symbolic text-based one that can be implemented by a commercial full-text indexer. Experiments on a 170-hour lecture set show an accuracy improvement by 30-60% for phrase searches and by 130% for two-term AND queries, compared to indexing linear text.
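A common way to turn per-arc word posteriors from a lattice into a single "is this word present here?" score is to combine the posteriors of all competing arcs that carry the word, treating the arcs as independent. This is a generic sketch of that idea, not necessarily the authors' exact approximation:

```python
from math import prod

def word_presence_posterior(arc_posteriors):
    """Probability that a word occurs at least once in an audio
    region, given the posterior probability of each lattice arc
    carrying it.  Treats arcs as independent -- a simplification."""
    return 1.0 - prod(1.0 - p for p in arc_posteriors)

# Three competing arcs hypothesize the same word in one region.
print(round(word_presence_posterior([0.5, 0.3, 0.2]), 3))  # 0.72
```

A binary 1-best index would count this word either 0 or 1 times; scoring with the combined posterior is what lets lattice-based search outperform indexing the linear transcript.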
Citations: 12
A response generation in the Mongolian spoken language system for accessing to multimedia knowledge base
Pub Date : 2008-12-01 DOI: 10.1109/SLT.2008.4777838
Munkhtuya Davaatsagaan, K. Paliwal
By using the automatic speech recognition (ASR) and text-to-speech (TTS) systems that have become available for Mongolian over the last few years, this research set out to implement a new version of the Mongolian Virtual Education Environment (VEE), which previously lacked a speech interface. The spoken language system aims to provide a natural interface between trainees and the environment, using simple and natural dialogues to let the user access the multimedia knowledge base of the VEE. We have worked on the response generation part of the system. This paper describes a TTS system for the VEE for university courses held in Mongolian. A concatenative speech synthesizer for Mongolian is applied for the TTS in response generation. A Festvox framework for unit selection speech synthesis was used to build the Mongolian voice. We discuss aspects of the voice development process and the results of a perceptual test of the synthesized voice.
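Unit selection synthesis of the kind Festvox builds chooses, for each target unit, one recorded candidate so that the summed target cost (fit to the desired unit) and join cost (smoothness between adjacent units) is minimal, typically by Viterbi search. A toy sketch of that search; the cost functions and unit names are invented:

```python
def select_units(candidates, target_cost, join_cost):
    """Viterbi search over candidate units per target position,
    minimising summed target and join costs (the core of unit
    selection synthesis; costs here are caller-supplied toys)."""
    # best-so-far cost and backpointer for each unit at position 0
    prev = {u: (target_cost(0, u), None) for u in candidates[0]}
    history = [prev]
    for i in range(1, len(candidates)):
        cur = {}
        for u in candidates[i]:
            bp = min(prev, key=lambda v: prev[v][0] + join_cost(v, u))
            cur[u] = (prev[bp][0] + join_cost(bp, u) + target_cost(i, u), bp)
        history.append(cur)
        prev = cur
    # Trace back the cheapest path.
    last = min(prev, key=lambda u: prev[u][0])
    path = [last]
    for i in range(len(candidates) - 1, 0, -1):
        last = history[i][last][1]
        path.append(last)
    return path[::-1]

candidates = [["a1", "a2"], ["b1", "b2"]]
tc = lambda i, u: {"a1": 0, "a2": 1, "b1": 1, "b2": 0}[u]
jc = lambda u, v: 0 if (u, v) == ("a1", "b2") else 1
print(select_units(candidates, tc, jc))  # ['a1', 'b2']
```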
Citations: 0
Low-resource speech translation of Urdu to English using semi-supervised part-of-speech tagging and transliteration
Pub Date : 2008-12-01 DOI: 10.1109/SLT.2008.4777891
A. Aminzadeh, Wade Shen
This paper describes the construction of ASR and MT systems for translation of speech from Urdu into English. As both Urdu pronunciation lexicons and Urdu-English bitexts are sparse, we employ several techniques that make use of semi-supervised annotation to improve ASR and MT training. Specifically, we describe 1) the construction of a semi-supervised HMM-based part-of-speech tagger that is used to train factored translation models and 2) the use of an HMM-based transliterator from which we derive a spelling-to-pronunciation model for Urdu used in ASR training. We describe experiments performed for both ASR and MT training in the context of the Urdu-to-English task of the NIST MT08 Evaluation and we compare methods making use of additional annotation with standard statistical MT and ASR baselines.
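The decoding step shared by HMM-based POS taggers and HMM-based transliterators is Viterbi search for the most likely hidden sequence (tags, or target-script characters) given the observations. A minimal sketch with a toy two-tag model; the states and probabilities are invented, not the paper's trained parameters:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard Viterbi decoding for an HMM tagger: returns the most
    likely state (tag) sequence for the observed words.  Unseen words
    get a tiny floor probability instead of zero."""
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-9), None)
          for s in states}]
    for t in range(1, len(obs)):
        col = {}
        for s in states:
            best = max(states, key=lambda ps: V[t - 1][ps][0] * trans_p[ps][s])
            col[s] = (V[t - 1][best][0] * trans_p[best][s]
                      * emit_p[s].get(obs[t], 1e-9), best)
        V.append(col)
    last = max(states, key=lambda s: V[-1][s][0])
    tags = [last]
    for t in range(len(obs) - 1, 0, -1):
        last = V[t][last][1]
        tags.append(last)
    return tags[::-1]

states = ["N", "V"]
start = {"N": 0.6, "V": 0.4}
trans = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}}
emit = {"N": {"dogs": 0.4, "runs": 0.1}, "V": {"runs": 0.5, "dogs": 0.05}}
print(viterbi(["dogs", "runs"], states, start, trans, emit))  # ['N', 'V']
```

In the semi-supervised setting the paper describes, the transition and emission tables would be estimated from a small amount of annotated data plus unlabeled text, but the decoding pass is the same.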
Citations: 3