Julien Karadayi, Camila Scaff, Alejandrina Cristia. "Diarization in Maximally Ecological Recordings: Data from Tsimane Children." Workshop on Spoken Language Technologies for Under-resourced Languages (SLTU 2018). doi:10.21437/SLTU.2018-7

Daylong recordings may be the most naturalistic and least invasive way to collect speech data: they sample all potential language use contexts, with a device unobtrusive enough to have little effect on people's behavior. As a result, this technology is relevant for studying diverse languages, including understudied languages in remote settings, provided we can apply effective unsupervised analysis procedures. In this paper, we analyze in detail the results of applying an open source package (DiViMe) and a proprietary alternative (LENA) to clips periodically sampled from daylong recorders worn by Tsimane children of the Bolivian Amazon (age range: 6-68 months; recording time per child: 4-22 h). Detailed analyses showed that the open source package fared no worse than the proprietary alternative. However, performance was overall rather dismal. We suggest promising directions for improvement based on analyses of variation in performance within our corpus.
Robert Jimerson, Kruthika Simha, R. Ptucha, Emily Tucker Prud'hommeaux. "Improving ASR Output for Endangered Language Documentation." SLTU 2018. doi:10.21437/SLTU.2018-39

Documenting endangered languages supports the historical preservation of diverse cultures. Automatic speech recognition (ASR), while potentially very useful for this task, has been underutilized for language documentation due to the challenges inherent in building robust models from extremely limited audio and text training resources. In this paper, we explore the utility of supplementing existing training resources using synthetic data, with a focus on Seneca, a morphologically complex endangered language of North America. We use transfer learning to train acoustic models using both the small amount of available acoustic training data and artificially distorted copies of that data. We then supplement the language model training data with verb forms generated by rule and sentences produced by an LSTM trained on the available text data. The addition of synthetic data yields reductions in word error rate, demonstrating the promise of data augmentation for this task.
Keshan Sanjaya Sodimana, Pasindu De Silva, Supheakmungkol Sarin, Oddur Kjartansson, Martin Jansche, Knot Pipatsrisawat, Linne Ha. "A Step-by-Step Process for Building TTS Voices Using Open Source Data and Frameworks for Bangla, Javanese, Khmer, Nepali, Sinhala, and Sundanese." SLTU 2018. doi:10.21437/SLTU.2018-14

The availability of language resources is vital for the development of text-to-speech (TTS) systems. Thus, open source resources are highly beneficial for TTS research communities focused on low-resourced languages. In this paper, we present data sets for six low-resourced languages that we open sourced to the public. The data sets consist of audio files, pronunciation lexicons, and phonology definitions for Bangla, Javanese, Khmer, Nepali, Sinhala, and Sundanese. These data sets are sufficient for building voices in these languages. We also describe a recipe for building a new TTS voice using our data together with openly available resources and tools.
Ben Foley, Joshua T. Arnold, Rolando Coto-Solano, Gautier Durantin, T. M. Ellison, D. Esch, Scott Heath, Frantisek Kratochvíl, Zara Maxwell-Smith, David Nash, Ola Olsson, Mark Richards, Nay San, H. Stoakes, N. Thieberger, Janet Wiles. "Building Speech Recognition Systems for Language Documentation: The CoEDL Endangered Language Pipeline and Inference System (ELPIS)." SLTU 2018. doi:10.21437/SLTU.2018-43
Anand Singh, Tien-Ping Tan. "Evaluating Code-Switched Malay-English Speech Using Time Delay Neural Networks." SLTU 2018. doi:10.21437/SLTU.2018-40

This paper presents a new baseline for a Malay-English code-switched speech corpus, built using factored time delay neural networks (TDNN-F). The model achieves a 28.07% relative reduction in word error rate (WER) compared to a Gaussian Mixture Model-Hidden Markov Model (GMM-HMM) baseline. These results confirm the effectiveness of time delay neural networks (TDNNs) for code-switched speech.
Roop Bajracharya, S. Regmi, B. Bal, Balaram Prasain. "Building a Natural Sounding Text-to-Speech System for the Nepali Language - Research and Development Challenges and Solutions." SLTU 2018. doi:10.21437/SLTU.2018-32

Text-to-Speech (TTS) synthesis has come far from its primitive, monotone synthetic voices to more natural and intelligible ones. One direct application of a natural sounding TTS system is screen readers for the visually impaired and blind community. The Festival Speech Synthesis System uses concatenative speech synthesis together with unit selection to generate a natural sounding voice. This work gives an account of our efforts toward developing a natural sounding TTS system for Nepali using the Festival system. We also describe the issues we faced and the solutions we derived, many of which apply to other under-resourced languages in the region.
Khumaisa Nur'Aini, Johanes Effendi, S. Sakti, M. Adriani, Satoshi Nakamura. "Corpus Construction and Semantic Analysis of Indonesian Image Description." SLTU 2018. doi:10.21437/SLTU.2018-9

Understanding language grounded in visual content is a challenging problem that has raised interest in both the computer vision and natural language processing communities. Flickr30k, one of the corpora that have become a standard benchmark for sentence-based image description, was initially limited to English descriptions, but it has been extended to German, French, and Czech. This paper describes our construction of an image description dataset in the Indonesian language. We translated English descriptions from the Flickr30k dataset into Indonesian with automatic machine translation and performed human validation on a portion of the results. We then constructed Indonesian descriptions of 10k images by crowdsourcing, without English descriptions or translations, and found semantic differences between the translations and the crowdsourced descriptions. We conclude that cultural differences between native speakers of English and Indonesian lead to different perceptions when constructing natural language expressions that describe an image.
B. Rambabu, S. Gangashetty. "Development of IIITH Hindi-English Code Mixed Speech Database." SLTU 2018. doi:10.21437/SLTU.2018-23

This paper presents the design and development of the IIITH Hindi-English code-mixed (IIITH-HE-CM) text and speech corpus. The corpus was collected from several native Hindi speakers from different geographical parts of India. The IIITH-HE-CM corpus contains phonetically balanced code-mixed sentences covering all the phonemes of Hindi and English. We use the frequency of word-internal triphone sequences, which carry language-specific information, to aid code-mixed speech recognition and language modelling. The code-mixed sentences are written in Devanagari script; because the processing tools operate on Roman symbols, we use the Indian Language Speech Sound Label (ILSL) transcription. A single acoustic model is built for the Hindi-English mixed language instead of separate language-dependent models. A large-vocabulary code-mixed speech recognition system is developed based on a deep neural network (DNN) architecture. The proposed system attains a lower word error rate (WER) than the conventional system.
Srinivas Kantheti, H. Patil. "Relative Phase Shift Features for Replay Spoof Detection System." SLTU 2018. doi:10.21437/SLTU.2018-21

Replay spoofing attempts to fool an Automatic Speaker Verification (ASV) system using a recording of a genuine utterance. Most studies have used magnitude-based features for replay detection and ignored phase-based features, even though phase-based features are also affected by the environmental characteristics of the recording. Hence, this paper uses phase-based features, namely parameterized Relative Phase Shift (RPS) and Modified Group Delay, alongside the baseline feature sets, Constant Q Cepstral Coefficients (CQCC) and Mel Frequency Cepstral Coefficients (MFCC). We find that score-level fusion of magnitude- and phase-based features gives better performance than the individual feature sets alone on version 2 of the ASVspoof 2017 Challenge data. In particular, fusing the RPS and CQCC feature sets with a Gaussian Mixture Model (GMM) classifier yields an Equal Error Rate (EER) of 12.58% on the evaluation set.
{"title":"The Intonation System of Tajik: Is it Identical to Persian?","authors":"Marina Agafonova","doi":"10.21437/SLTU.2018-54","DOIUrl":"https://doi.org/10.21437/SLTU.2018-54","url":null,"abstract":"","PeriodicalId":190269,"journal":{"name":"Workshop on Spoken Language Technologies for Under-resourced Languages","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122776472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}