
Latest publications — 2017 20th Conference of the Oriental Chapter of the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment (O-COCOSDA)

Development of a multilingual isolated digits speech corpus
Emmanuel Malaay, Michael Simora, R. J. Cabatic, Nathaniel Oco, R. Roxas
We present a multilingual speech corpus of isolated digits. As a case study, we focus on languages spoken in the Philippines: English, Filipino, Ilocano, Cebuano, and Spanish. The corpus has a duration of almost nine hours, collected from 262 speakers. The data were annotated at the word level and will be used to train acoustic models with ASR toolkits. Because the corpus is intended for an automatic speech recognition (ASR) system, the database must be sufficient to develop such a system.
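Word-level annotations like these are straightforward to summarize with a short script. The sketch below is illustrative only — the record fields (`speaker`, `language`, `duration_sec`) are hypothetical, not the paper's actual annotation format:

```python
from collections import defaultdict

# Hypothetical word-level annotation records: one digit utterance per entry.
records = [
    {"speaker": "spk001", "language": "Filipino", "digit": "lima", "duration_sec": 0.8},
    {"speaker": "spk001", "language": "English", "digit": "five", "duration_sec": 0.6},
    {"speaker": "spk002", "language": "Cebuano", "digit": "lima", "duration_sec": 0.7},
]

def corpus_summary(records):
    """Total duration, distinct speaker count, and per-language duration."""
    per_language = defaultdict(float)
    speakers = set()
    total = 0.0
    for r in records:
        per_language[r["language"]] += r["duration_sec"]
        speakers.add(r["speaker"])
        total += r["duration_sec"]
    return {"total_sec": total,
            "num_speakers": len(speakers),
            "per_language_sec": dict(per_language)}

summary = corpus_summary(records)
print(summary["num_speakers"])  # 2
```

Running the same aggregation over the full annotation set is how figures like "almost nine hours from 262 speakers" would be verified.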
DOI: 10.1109/ICSDA.2017.8384452 · Published: 2017-11-01
Citations: 0
Creation of a multi-paraphrase corpus based on various elementary operations
Johanes Effendi, S. Sakti, Satoshi Nakamura
Paraphrasing resembles monolingual translation: a source sentence is rewritten into other sentences that must preserve its original meaning. Building an automatic paraphraser requires a collection of paraphrased expressions, but manually collecting paraphrases is expensive and time-consuming. Most existing paraphrase corpora cover only one-to-one parallel sentences and neglect the fact that many paraphrase variants can be generated from a single source sentence; the manipulations applied to the original sentences are also difficult to track. Furthermore, a corpus is usually dedicated to a single application and is not reusable in others. In this research, we construct a paraphrase corpus based on elementary operations (reordering, substitution, deletion, insertion) on a crowdsourcing platform, generating multiple paraphrases from each source sentence. These elementary paraphrase operations can serve various applications (e.g., deletion for summarization and reordering for machine translation). Our evaluations show the richness and effectiveness of the created corpus.
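The four elementary operations can be illustrated on a token list. This is a minimal sketch of the operation types themselves, not the authors' crowdsourcing pipeline; the function names are our own:

```python
def reorder(tokens, i, j):
    """Swap the tokens at positions i and j."""
    out = list(tokens)
    out[i], out[j] = out[j], out[i]
    return out

def substitute(tokens, i, new_token):
    """Replace the token at position i."""
    out = list(tokens)
    out[i] = new_token
    return out

def delete(tokens, i):
    """Remove the token at position i."""
    return [t for k, t in enumerate(tokens) if k != i]

def insert(tokens, i, new_token):
    """Insert a token before position i."""
    out = list(tokens)
    out.insert(i, new_token)
    return out

source = "the quick brown fox".split()
print(substitute(source, 1, "fast"))  # ['the', 'fast', 'brown', 'fox']
print(delete(source, 2))              # ['the', 'quick', 'fox']
```

Because each worker applies one named operation, the manipulation behind every paraphrase variant stays trackable — the property the abstract says one-to-one corpora lack.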
DOI: 10.1109/ICSDA.2017.8384465 · Published: 2017-11-01
Citations: 1
A standardization program of speech corpus collection
Zhigang Yin, Ai-jun Li
The speech corpus is the basis of linguistic research and natural language processing. To make speech corpora more efficient to collect and easier to use and share, a standardization scheme for speech corpus projects is necessary. This paper proposes a standardization program that covers all aspects of data collection, annotation, and distribution, and introduces specifications for constructing a speech corpus. Finally, a telephone speech corpus, TSC973, is presented to exemplify the standardization program.
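Once a specification fixes the required metadata fields, conformance can be checked mechanically. The field names below are hypothetical examples for illustration, not the paper's actual specification:

```python
# Hypothetical required metadata fields for one recording session.
REQUIRED_FIELDS = {
    "speaker_id": str,
    "sample_rate_hz": int,
    "channel": str,        # e.g. "telephone", "microphone"
    "transcription": str,
}

def validate_session(meta):
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in meta:
            problems.append(f"missing field: {field}")
        elif not isinstance(meta[field], expected):
            problems.append(f"bad type for {field}: {type(meta[field]).__name__}")
    return problems

ok = {"speaker_id": "S001", "sample_rate_hz": 8000,
      "channel": "telephone", "transcription": "你好"}
print(validate_session(ok))                      # []
print(validate_session({"speaker_id": "S002"}))  # three missing-field problems
```

A validator of this shape is what makes a standardization program enforceable rather than advisory: every delivered session either passes or yields a concrete list of defects.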
DOI: 10.1109/ICSDA.2017.8384471 · Published: 2017-11-01
Citations: 1
Multiresolution CNN for reverberant speech recognition
Sunchan Park, Yongwon Jeong, H. S. Kim
The performance of automatic speech recognition (ASR) has been greatly improved by deep neural network (DNN) acoustic models. However, DNN-based systems still perform poorly in reverberant environments. Convolutional neural network (CNN) acoustic models have shown a lower word error rate (WER) in distant speech recognition than fully connected DNN acoustic models. To improve reverberant speech recognition with CNN acoustic models, we propose a multiresolution CNN with two separate streams: a wideband feature with a wide context window and a narrowband feature with a narrow context window. Experiments on the ASR task of the REVERB Challenge 2014 show that, compared with a conventional CNN, the proposed multiresolution CNN reduces the WER by 8.79% on simulated test data and 8.83% on real-condition test data.
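The two streams differ only in how many neighboring frames are spliced around each center frame. A minimal sketch of context-window splicing, with edge frames padded by repetition — the window sizes here are illustrative, not the paper's settings:

```python
def splice(frames, context):
    """For each frame, concatenate `context` frames on each side,
    clamping (i.e. repeating the first/last frame) at the edges."""
    n = len(frames)
    out = []
    for t in range(n):
        window = []
        for k in range(t - context, t + context + 1):
            k = min(max(k, 0), n - 1)  # clamp index to the valid range
            window.extend(frames[k])
        out.append(window)
    return out

# Three 2-dimensional feature frames.
frames = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
wide = splice(frames, context=2)     # wide-context stream: 5 frames x 2 dims
narrow = splice(frames, context=1)   # narrow-context stream: 3 frames x 2 dims
print(len(wide[0]), len(narrow[0]))  # 10 6
```

Each stream then feeds its own convolutional front end, so the network sees the same signal at two temporal resolutions.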
DOI: 10.1109/ICSDA.2017.8384470 · Published: 2017-11-01
Citations: 18
Modeling of linguistic and acoustic information from speech signal for multilingual spoken language identification system (SLID)
S. Bansal, S. Agrawal
Spoken language identification is the task of identifying a language from a given speech signal. Efforts to develop language identification systems for Indian languages have been limited by speaker availability and language legibility, yet the need for SLID in civil and defense applications is growing daily. This paper reports a study that develops a multilingual identification system for two Indian languages, Hindi and Manipuri, using the PPRLM approach, which requires a phoneme-labeled speech corpus for each language. For each language, a data set of 300 phonetically rich sentences spoken by 25 native speakers (15,000 utterances) was recorded, analyzed, and annotated phonemically to build a trigram-based phonotactic model. Speech features were extracted using MFCCs, and a GMM was used as the classifier. Results show that accuracy increases with the number of Gaussians and with the number of training samples.
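The phonotactic half of a PPRLM system scores a recognized phone sequence under a per-language trigram model and picks the best-scoring language. A minimal sketch with add-one smoothing — the toy phone strings and vocabulary size are our own, not the paper's data:

```python
from collections import Counter
from math import log

def trigrams(phones):
    return [tuple(phones[i:i + 3]) for i in range(len(phones) - 2)]

def train(sequences):
    """Count trigrams over all training phone sequences of one language."""
    counts = Counter()
    for seq in sequences:
        counts.update(trigrams(seq))
    return counts

def score(counts, phones, vocab_size=100):
    """Add-one-smoothed log-likelihood of a phone sequence."""
    total = sum(counts.values())
    return sum(log((counts[t] + 1) / (total + vocab_size))
               for t in trigrams(phones))

# Toy per-language models built from a handful of phone strings.
models = {
    "hindi":    train([list("namaste"), list("dhanyavad")]),
    "manipuri": train([list("khurumjari"), list("thagatchari")]),
}

def identify(phones):
    return max(models, key=lambda lang: score(models[lang], phones))

print(identify(list("namaskar")))  # hindi
```

In the full system the phone sequence would come from a phone recognizer, and this phonotactic score is combined with the MFCC/GMM acoustic score.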
DOI: 10.1109/ICSDA.2017.8384468 · Published: 2017-11-01
Citations: 5
AISHELL-1: An open-source Mandarin speech corpus and a speech recognition baseline
Hui Bu, Jiayu Du, Xingyu Na, Bengu Wu, Hao Zheng
We release an open-source Mandarin speech corpus called AISHELL-1. It is by far the largest corpus suitable for conducting speech recognition research and building speech recognition systems for Mandarin. The recording procedure, including the audio capture devices and environments, is presented in detail, and the preparation of related resources, including transcriptions and the lexicon, is described. The corpus is released with a Kaldi recipe. Experimental results imply that the quality of the audio recordings and transcriptions is promising.
DOI: 10.1109/ICSDA.2017.8384449 · Published: 2017-09-16
Citations: 585
Phone-aware neural language identification
Zhiyuan Tang, Dong Wang, Yixiang Chen, Ying Shi, Lantian Li
Pure acoustic neural models, particularly the LSTM-RNN model, have shown great potential in language identification (LID). However, phonetic information has been largely overlooked by most existing neural LID models, although it has been used with great success in conventional phonetic LID systems. We present a phone-aware neural LID architecture: a deep LSTM-RNN LID system that accepts output from an RNN-based ASR system. By utilizing this phonetic knowledge, LID performance can be significantly improved. Interestingly, even if the test language is not involved in the ASR training, the phonetic knowledge still contributes substantially. Our experiments on four languages from the Babel corpus demonstrate that the phone-aware approach is highly effective.
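At the input level, the phone-aware idea amounts to widening each LID frame with phonetic information produced by the ASR system. A minimal sketch of the frame-level combination — the vector dimensions and the concatenation scheme are assumptions for illustration, not the paper's exact configuration:

```python
def phone_aware_frames(acoustic, phone_posteriors):
    """Concatenate acoustic features with ASR phone posteriors
    frame by frame, producing the input to the LID LSTM."""
    assert len(acoustic) == len(phone_posteriors), "streams must be frame-aligned"
    return [a + p for a, p in zip(acoustic, phone_posteriors)]

# Two frames: 3-dim acoustic features + 4-dim phone posteriors.
acoustic = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
posteriors = [[0.7, 0.1, 0.1, 0.1], [0.05, 0.8, 0.1, 0.05]]
frames = phone_aware_frames(acoustic, posteriors)
print(len(frames), len(frames[0]))  # 2 7
```

Because the ASR phone inventory is language-independent at this level, the enriched frames can still help even when the test language was not seen in ASR training, as the abstract notes.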
DOI: 10.1109/ICSDA.2017.8384445 · Published: 2017-05-09
Citations: 5