
Workshop on Spoken Language Technologies for Under-resourced Languages: Latest Publications

Diarization in Maximally Ecological Recordings: Data from Tsimane Children
Pub Date : 2018-08-29 DOI: 10.21437/SLTU.2018-7
Julien Karadayi, Camila Scaff, Alejandrina Cristia
Daylong recordings may be the most naturalistic and least invasive way to collect speech data, sampling all potential language use contexts with a device that is unobtrusive enough to have little effect on people's behaviors. As a result, this technology is relevant for studying diverse languages, including understudied languages in remote settings, provided we can apply effective unsupervised analysis procedures. In this paper, we analyze in detail the results of applying an open-source package (DiViMe) and a proprietary alternative (LENA) to clips periodically sampled from daylong recorders worn by Tsimane children of the Bolivian Amazon (age range: 6-68 months; recording time per child: 4-22 h). Detailed analyses showed the open-source package fared no worse than the proprietary alternative. However, performance was overall rather dismal. We suggest promising directions for improvement based on analyses of variation in performance within our corpus.
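Diarization output like this is conventionally scored with the diarization error rate (DER). As a purely illustrative sketch (not the DiViMe or LENA evaluation code), a simplified frame-based DER without speaker-label mapping can be computed like so:

```python
# Hypothetical frame-based diarization scoring sketch; real evaluations also
# search for an optimal speaker mapping and apply a forgiveness collar.
from typing import List, Optional, Tuple

Segment = Tuple[float, float, str]  # (start_sec, end_sec, speaker_label)

def frame_labels(segments: List[Segment], total: float, step: float = 0.01) -> List[Optional[str]]:
    """Discretize a segmentation into fixed-step frame labels (None = non-speech)."""
    n = int(round(total / step))
    labels: List[Optional[str]] = [None] * n
    for start, end, spk in segments:
        for i in range(int(round(start / step)), min(int(round(end / step)), n)):
            labels[i] = spk
    return labels

def diarization_error_rate(ref: List[Segment], hyp: List[Segment], total: float) -> float:
    """Errors (miss, false alarm, speaker confusion) over reference speech frames.
    Simplification: assumes hypothesis labels are already mapped onto reference labels."""
    r = frame_labels(ref, total)
    h = frame_labels(hyp, total)
    speech = sum(1 for x in r if x is not None)
    errors = sum(1 for a, b in zip(r, h) if (a is not None or b is not None) and a != b)
    return errors / max(speech, 1)
```

For instance, a hypothesis that attributes two one-second turns by different speakers to a single speaker scores a DER of 0.5 under this simplification.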
Citations: 1
Improving ASR Output for Endangered Language Documentation
Pub Date : 2018-08-29 DOI: 10.21437/SLTU.2018-39
Robert Jimerson, Kruthika Simha, R. Ptucha, Emily Tucker Prud'hommeaux
Documenting endangered languages supports the historical preservation of diverse cultures. Automatic speech recognition (ASR), while potentially very useful for this task, has been underutilized for language documentation due to the challenges inherent in building robust models from extremely limited audio and text training resources. In this paper, we explore the utility of supplementing existing training resources using synthetic data, with a focus on Seneca, a morphologically complex endangered language of North America. We use transfer learning to train acoustic models using both the small amount of available acoustic training data and artificially distorted copies of that data. We then supplement the language model training data with verb forms generated by rule and sentences produced by an LSTM trained on the available text data. The addition of synthetic data yields reductions in word error rate, demonstrating the promise of data augmentation for this task.
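The "artificially distorted copies" of acoustic data can take many forms; the paper's augmentation code is not reproduced here, but a minimal sketch of two common distortions (additive noise at a target SNR, and speed perturbation by resampling) might look like this:

```python
# Illustrative audio-distortion sketch (hypothetical helpers, operating on a
# waveform represented as a plain list of float samples).
import math
import random

def add_noise(wave, snr_db, seed=0):
    """Mix white Gaussian noise into a waveform at a target signal-to-noise ratio (dB)."""
    rng = random.Random(seed)
    signal_power = sum(x * x for x in wave) / len(wave)
    sigma = math.sqrt(signal_power / (10 ** (snr_db / 10)))
    return [x + rng.gauss(0.0, sigma) for x in wave]

def change_speed(wave, factor):
    """Speed-perturb by linear-interpolation resampling (factor > 1 -> faster, shorter)."""
    n_out = int(len(wave) / factor)
    out = []
    for i in range(n_out):
        pos = i * factor
        j = min(int(pos), len(wave) - 2)  # left neighbor index, clamped
        frac = pos - j
        out.append(wave[j] * (1 - frac) + wave[j + 1] * frac)
    return out
```

Each distorted copy can then be added to the acoustic training set alongside the original recordings.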
{"title":"Improving ASR Output for Endangered Language Documentation","authors":"Robert Jimerson, Kruthika Simha, R. Ptucha, Emily Tucker Prud'hommeaux","doi":"10.21437/SLTU.2018-39","DOIUrl":"https://doi.org/10.21437/SLTU.2018-39","url":null,"abstract":"Documenting endangered languages supports the historical preservation of diverse cultures. Automatic speech recognition (ASR), while potentially very useful for this task, has been underutilized for language documentation due to the challenges inherent in building robust models from extremely limited audio and text training resources. In this paper, we explore the utility of supplementing existing training resources using synthetic data, with a focus on Seneca, a morphologically complex endangered language of North America. We use transfer learning to train acoustic models using both the small amount of available acoustic training data and artificially distorted copies of that data. We then supplement the language model training data with verb forms generated by rule and sentences produced by an LSTM trained on the available text data. The addition of synthetic data yields reductions in word error rate, demonstrating the promise of data augmentation for this task.","PeriodicalId":190269,"journal":{"name":"Workshop on Spoken Language Technologies for Under-resourced Languages","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132950836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
A Step-by-Step Process for Building TTS Voices Using Open Source Data and Frameworks for Bangla, Javanese, Khmer, Nepali, Sinhala, and Sundanese
Pub Date : 2018-08-29 DOI: 10.21437/SLTU.2018-14
Keshan Sanjaya Sodimana, Pasindu De Silva, Supheakmungkol Sarin, Oddur Kjartansson, Martin Jansche, Knot Pipatsrisawat, Linne Ha
The availability of language resources is vital for the development of text-to-speech (TTS) systems. Thus, open source resources are highly beneficial for TTS research communities focused on low-resourced languages. In this paper, we present data sets for 6 low-resourced languages that we open sourced to the public. The data sets consist of audio files, pronunciation lexicons, and phonology definitions for Bangla, Javanese, Khmer, Nepali, Sinhala, and Sundanese. These data sets are sufficient for building voices in these languages. We also describe a recipe for building a new TTS voice using our data together with openly available resources and tools.
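To illustrate how such resources fit together, here is a hypothetical sketch of loading a pronunciation lexicon in the common "word TAB phoneme-sequence" format and deriving its phoneme inventory; the exact file formats of the released data sets may differ:

```python
# Hypothetical lexicon-loading sketch, not the released tooling.
def load_lexicon(lines):
    """Parse a pronunciation lexicon: one 'word<TAB>phoneme sequence' per line.
    A word may have several pronunciation variants."""
    lexicon = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        word, pron = line.split("\t", 1)
        lexicon.setdefault(word, []).append(pron.split())
    return lexicon

def phoneme_inventory(lexicon):
    """Collect the sorted set of phonemes actually used by the lexicon."""
    return sorted({p for prons in lexicon.values() for pron in prons for p in pron})
```

The phoneme inventory derived this way can be cross-checked against the phonology definition shipped with each language's data set.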
Citations: 31
Building Speech Recognition Systems for Language Documentation: The CoEDL Endangered Language Pipeline and Inference System (ELPIS)
Pub Date : 2018-08-29 DOI: 10.21437/SLTU.2018-43
Ben Foley, Joshua T. Arnold, Rolando Coto-Solano, Gautier Durantin, T. M. Ellison, D. Esch, Scott Heath, Frantisek Kratochvíl, Zara Maxwell-Smith, David Nash, Ola Olsson, Mark Richards, Nay San, H. Stoakes, N. Thieberger, Janet Wiles
{"title":"Building Speech Recognition Systems for Language Documentation: The CoEDL Endangered Language Pipeline and Inference System (ELPIS)","authors":"Ben Foley, Joshua T. Arnold, Rolando Coto-Solano, Gautier Durantin, T. M. Ellison, D. Esch, Scott Heath, Frantisek Kratochvíl, Zara Maxwell-Smith, David Nash, Ola Olsson, Mark Richards, Nay San, H. Stoakes, N. Thieberger, Janet Wiles","doi":"10.21437/SLTU.2018-43","DOIUrl":"https://doi.org/10.21437/SLTU.2018-43","url":null,"abstract":"","PeriodicalId":190269,"journal":{"name":"Workshop on Spoken Language Technologies for Under-resourced Languages","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127820332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Evaluating Code-Switched Malay-English Speech Using Time Delay Neural Networks
Pub Date : 2018-08-29 DOI: 10.21437/SLTU.2018-40
Anand Singh, Tien-Ping Tan
This paper presents a new baseline for a Malay-English code-switched speech corpus, constructed using a factored form of time delay neural networks (TDNN-F), which achieves a significant relative word error rate (WER) reduction of 28.07% compared to a Gaussian Mixture Model-Hidden Markov Model (GMM-HMM) baseline. The results also confirm the effectiveness of time delay neural networks (TDNNs) for code-switched speech.
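WER and relative WER reduction follow standard definitions; as a quick illustration (not the authors' scoring tool), WER via word-level edit distance and the relative-reduction formula can be sketched as:

```python
# Illustrative WER sketch: Levenshtein distance over words, plus the
# relative-reduction formula used when comparing two systems.
def word_error_rate(ref, hyp):
    """Minimum substitutions + insertions + deletions, divided by reference length."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

def relative_reduction(baseline_wer, new_wer):
    """Relative WER reduction in percent, as reported in system comparisons."""
    return 100.0 * (baseline_wer - new_wer) / baseline_wer
```

For example, a drop from 20.0% to about 14.39% WER corresponds to roughly the 28.07% relative reduction reported above.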
Citations: 3
Building a Natural Sounding Text-to-Speech System for the Nepali Language - Research and Development Challenges and Solutions
Pub Date : 2018-08-29 DOI: 10.21437/SLTU.2018-32
Roop Bajracharya, S. Regmi, B. Bal, Balaram Prasain
Text-to-Speech (TTS) synthesis has come a long way from its primitive synthetic monotone voices to more natural and intelligible sounding voices. One direct application of a natural-sounding TTS system is screen-reader software for the visually impaired and blind community. The Festival Speech Synthesis System uses a concatenative speech synthesis method together with a unit selection process to generate a natural sounding voice. This work primarily gives an account of the efforts put towards developing a natural-sounding TTS system for Nepali using the Festival system. We also shed light on the issues faced and the solutions derived, which overlap considerably with other similar under-resourced languages in the region.
Citations: 0
Corpus Construction and Semantic Analysis of Indonesian Image Description
Pub Date : 2018-08-29 DOI: 10.21437/SLTU.2018-9
Khumaisa Nur'Aini, Johanes Effendi, S. Sakti, M. Adriani, Satoshi Nakamura
Understanding language grounded in visual content is a challenging problem that has raised interest in both the computer vision and natural language processing communities. Flickr30k, one of the corpora that have become a standard benchmark for studying sentence-based image description, was initially limited to English descriptions, but it has been extended to German, French, and Czech. This paper describes our construction of an image description dataset in the Indonesian language. We translated English descriptions from the Flickr30k dataset into Indonesian with automatic machine translation and performed human validation on a portion of the results. We then constructed Indonesian image descriptions of 10k images by crowdsourcing, without English descriptions or translations, and found semantic differences between the translations and the crowdsourced descriptions. We conclude that cultural differences between native speakers of English and Indonesian create different perceptions when constructing natural language expressions that describe an image.
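One crude way to quantify divergence between a translated description and a crowdsourced one is token-set overlap; this Jaccard sketch is purely illustrative and is not the semantic analysis performed in the paper:

```python
# Hypothetical surface-overlap measure between two descriptions of one image.
def jaccard_similarity(desc_a: str, desc_b: str) -> float:
    """Token-set overlap between two descriptions (0 = disjoint, 1 = identical sets)."""
    a, b = set(desc_a.lower().split()), set(desc_b.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

Low surface overlap does not by itself prove a semantic difference, which is why human analysis of the kind described above is still needed.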
Citations: 2
Development of IIITH Hindi-English Code Mixed Speech Database
Pub Date : 2018-08-29 DOI: 10.21437/SLTU.2018-23
B. Rambabu, S. Gangashetty
This paper presents the design and development of the IIITH Hindi-English code-mixed (IIITH-HE-CM) text and corresponding speech corpus. The corpus is collected from several native Hindi speakers from different geographical parts of India. The IIITH-HE-CM corpus has phonetically balanced code-mixed sentences covering all the phonemes of Hindi and English. We use the frequency of word-internal triphone sequences, which carries language-specific information that helps in code-mixed speech recognition and language modelling. The code-mixed sentences are written in Devanagari script. Since computers can recognize Roman symbols, we used the Indian Language Speech sound Label (ILSL) transcription. An acoustic model is built for the Hindi-English mixed language instead of language-dependent models. A large-vocabulary code-mixed speech recognition system is developed based on a deep neural network (DNN) architecture. The proposed code-mixed speech recognition system attains a lower word error rate (WER) than the conventional system.
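In the spirit of the word-internal triphone statistics described above (the corpus's actual tooling is not published here), counting triphone contexts from pronunciations can be sketched minimally as:

```python
# Illustrative word-internal triphone counter; uses the common l-c+r notation
# for a center phone c with left context l and right context r.
from collections import Counter

def word_internal_triphones(pronunciations):
    """Count word-internal triphones over a list of phoneme sequences,
    ignoring word-boundary (cross-word) contexts."""
    counts = Counter()
    for phones in pronunciations:
        for i in range(1, len(phones) - 1):
            counts[f"{phones[i - 1]}-{phones[i]}+{phones[i + 1]}"] += 1
    return counts
```

Such frequency tables can help check that a sentence set is phonetically balanced before recording.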
Citations: 1
Relative Phase Shift Features for Replay Spoof Detection System
Pub Date : 2018-08-29 DOI: 10.21437/SLTU.2018-21
Srinivas Kantheti, H. Patil
Replay spoofing tries to fool an Automatic Speaker Verification (ASV) system with recordings of a genuine utterance. Most studies have used magnitude-based features and ignored phase-based features for replay detection. However, phase-based features are also affected by the environmental characteristics of the recording. Hence, phase-based features, namely parameterized Relative Phase Shift (RPS) and Modified Group Delay, are used in this paper alongside the baseline feature sets, Constant Q Cepstral Coefficients (CQCC) and Mel Frequency Cepstral Coefficients (MFCC). We find that score-level fusion of magnitude-based and phase-based features gives better performance than either feature set alone on the ASVspoof 2017 Challenge version 2. In particular, the Equal Error Rate (EER) is 12.58% on the evaluation set with the fusion of the RPS and CQCC feature sets using a Gaussian Mixture Model (GMM) classifier.
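Score-level fusion and EER follow standard recipes; a simplified sketch (a weighted score sum and a brute-force threshold scan, not the authors' implementation) might look like:

```python
# Illustrative score-fusion and EER sketch for a spoof-detection setting where
# higher scores mean "more likely genuine".
def fuse_scores(scores_a, scores_b, weight=0.5):
    """Weighted score-level fusion of two subsystems' per-trial scores."""
    return [weight * a + (1 - weight) * b for a, b in zip(scores_a, scores_b)]

def equal_error_rate(genuine, spoof):
    """Approximate EER: the operating point where the false-acceptance rate
    (spoof accepted) and false-rejection rate (genuine rejected) cross.
    Scans every candidate threshold; fine for small score lists."""
    best = 1.0
    for t in sorted(genuine + spoof):
        far = sum(s >= t for s in spoof) / len(spoof)
        frr = sum(s < t for s in genuine) / len(genuine)
        best = min(best, max(far, frr))
    return best
```

A fused system with perfectly separated genuine and spoof scores would reach an EER of 0.0 under this scan.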
Citations: 8
The Intonation System of Tajik: Is it Identical to Persian?
Pub Date : 2018-08-29 DOI: 10.21437/SLTU.2018-54
Marina Agafonova
{"title":"The Intonation System of Tajik: Is it Identical to Persian?","authors":"Marina Agafonova","doi":"10.21437/SLTU.2018-54","DOIUrl":"https://doi.org/10.21437/SLTU.2018-54","url":null,"abstract":"","PeriodicalId":190269,"journal":{"name":"Workshop on Spoken Language Technologies for Under-resourced Languages","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122776472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0