Kurdish poetry and prose narratives were historically transmitted orally rather than in written form. As an essential medium of oral narration and literature, Kurdish lyrics have become a vital resource for different types of studies, including Digital Humanities, Computational Folkloristics and Computational Linguistics. As the first study of its kind for the Kurdish language, this paper presents our efforts in transcribing and collecting Kurdish folk lyrics as a corpus that covers various Kurdish musical genres, in particular Beyt, Gorani, Bend, and Heyran. We believe that this corpus contributes to Kurdish language processing in several ways: it compensates for the lack of a long written tradition by incorporating oral literature, opens an unexplored realm in Kurdish language processing, and helps initiate Kurdish computational folkloristics. Our corpus contains 49,582 tokens in the Sorani dialect of Kurdish. The corpus is publicly available in the Text Encoding Initiative (TEI) format for non-commercial use.
"A Corpus of the Sorani Kurdish Folkloric Lyrics". Sina Ahmadi, Hossein Hassani, K. Abedi. Workshop on Spoken Language Technologies for Under-resourced Languages, 2020-05-16. https://doi.org/10.13025/1YDH-EW61
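The abstract above distributes the corpus in TEI format. As a hypothetical sketch of how one transcribed lyric might be serialized, the following uses Python's standard library; the `<lg>`/`<l>` line-group elements and the TEI namespace are standard TEI conventions, but the title and verse text are invented examples, not the corpus's actual schema or content:

```python
import xml.etree.ElementTree as ET

# Hypothetical TEI-style encoding of one lyric. Element names (teiHeader,
# fileDesc, titleStmt, lg, l) follow common TEI conventions; the title and
# verse strings below are invented placeholders.
TEI_NS = "http://www.tei-c.org/ns/1.0"
ET.register_namespace("", TEI_NS)

def tag(name):
    return f"{{{TEI_NS}}}{name}"

tei = ET.Element(tag("TEI"))
title_stmt = ET.SubElement(
    ET.SubElement(ET.SubElement(tei, tag("teiHeader")), tag("fileDesc")),
    tag("titleStmt"))
ET.SubElement(title_stmt, tag("title")).text = "Example Beyt lyric"

body = ET.SubElement(ET.SubElement(tei, tag("text")), tag("body"))
stanza = ET.SubElement(body, tag("lg"), type="stanza")
for verse in ["first verse line", "second verse line"]:
    ET.SubElement(stanza, tag("l")).text = verse

print(ET.tostring(tei, encoding="unicode"))
```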
Bharathi Raja Chakravarthi, Navya Jose, Shardul Suryawanshi, E. Sherly, John P. McCrae
There is an increasing demand for sentiment analysis of text from social media, which is mostly code-mixed. Systems trained on monolingual data fail on code-mixed data because of the complexity of mixing at different levels of the text. However, very few resources are available for creating models specific to code-mixed data. Although much research in multilingual and cross-lingual sentiment analysis has used semi-supervised or unsupervised methods, supervised methods still perform better. Only a few datasets are available for popular language pairs such as English-Spanish, English-Hindi, and English-Chinese, and no resources are available for Malayalam-English code-mixed data. This paper presents a new gold-standard corpus for sentiment analysis of code-mixed Malayalam-English text, annotated by voluntary annotators. The corpus obtained a Krippendorff's alpha above 0.8. We use this new corpus to provide a benchmark for sentiment analysis of Malayalam-English code-mixed text.
"A Sentiment Analysis Dataset for Code-Mixed Malayalam-English". Workshop on Spoken Language Technologies for Under-resourced Languages, 2020-05-11. https://doi.org/10.5281/ZENODO.4015234
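The Krippendorff's alpha figure quoted above measures inter-annotator agreement. A compact pure-Python sketch of the nominal-data variant is shown below; it is not the authors' tooling, and the sample annotations are invented:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: one list of labels per annotated item; items with <2 ratings are skipped."""
    o = Counter()  # coincidence matrix over ordered label pairs
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for i, j in permutations(range(m), 2):
            o[(ratings[i], ratings[j])] += 1.0 / (m - 1)
    n_c = Counter()  # marginal total per label
    for (c, _), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())
    observed = sum(v for (c, k), v in o.items() if c != k)
    expected = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - observed / expected

# Two annotators labelling four comments; they disagree on the second one.
alpha = krippendorff_alpha_nominal(
    [["pos", "pos"], ["pos", "neg"], ["neg", "neg"], ["neg", "neg"]])
print(alpha)  # -> 0.5333333333333333
```

In practice a threshold such as the 0.8 reported in the abstract is commonly taken to indicate reliable annotation.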
Bharathi Raja Chakravarthi, V. Muralidaran, R. Priyadharshini, John P. McCrae
Understanding the sentiment of a comment on a video or an image is an essential task in many applications. Sentiment analysis of text can be useful for various decision-making processes. One such application is analysing the popular sentiment of videos on social media based on viewer comments. However, comments on social media do not follow strict rules of grammar, and they mix more than one language, often written in non-native scripts. The non-availability of annotated code-mixed data for a low-resourced language like Tamil adds to the difficulty of this problem. To overcome this, we created a gold-standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. We present the inter-annotator agreement and report the results of sentiment analysis trained on this corpus as a benchmark.
"Corpus Creation for Sentiment Analysis in Code-Mixed Tamil-English Text". Workshop on Spoken Language Technologies for Under-resourced Languages, 2020-05-11. https://doi.org/10.5281/ZENODO.4015253
In this study, the acoustic-phonetic attributes of palatalization in Kashmiri speech are investigated. Palatalization is a phonetic feature unique to Kashmiri in the Indian context, and an automated approach is proposed to detect it in continuous Kashmiri speech. The i-matra vowel palatalizes the consonant connected to it. Therefore, these consonants are investigated in synchrony with vowel regions, which are spotted using the instantaneous energy computed from the envelope derivative of the speech signal. The resonating characteristics of the vocal-tract system, which reflect the formant dynamics, are used to differentiate palatalized consonants from other consonants. In this regard, the Hilbert envelope of the numerator of the group-delay function, which provides good time-frequency resolution, is used to extract formants. The palatalization detection experiments were carried out in various vowel contexts using these acoustic cues and produced a promising detection accuracy of 92.46%.
"Automatic Detection of Palatalized Consonants in Kashmiri". Ramakrishna Thirumuru, K. Gurugubelli, A. Vuppala. Workshop on Spoken Language Technologies for Under-resourced Languages, 2018-08-29. https://doi.org/10.21437/SLTU.2018-25
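The pipeline above relies on envelope computations. As a generic illustration of the Hilbert-envelope step only (not the paper's full group-delay formant pipeline), the analytic signal can be built in the frequency domain and its magnitude taken:

```python
import numpy as np

# Generic Hilbert-envelope sketch: zero out negative frequencies to form the
# analytic signal, then take its magnitude. The test signal is a synthetic
# tone, not Kashmiri speech.
def hilbert_envelope(x):
    n = len(x)
    spectrum = np.fft.fft(x)
    weights = np.zeros(n)
    weights[0] = 1.0
    weights[1:(n + 1) // 2] = 2.0  # double positive frequencies
    if n % 2 == 0:
        weights[n // 2] = 1.0      # Nyquist bin kept once
    return np.abs(np.fft.ifft(spectrum * weights))

# A pure tone has a flat envelope equal to its amplitude.
t = np.arange(1000) / 1000.0
env = hilbert_envelope(0.5 * np.cos(2 * np.pi * 10 * t))
print(round(float(env.mean()), 3))  # -> 0.5
```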
Keshan Sanjaya Sodimana, Pasindu De Silva, R. Sproat, T. Wattanavekin, Alexander Gutkin, Knot Pipatsrisawat
Text normalization is the process of converting non-standard words (NSWs) such as numbers and abbreviations into standard words so that their pronunciations can be derived by typical means (usually lexicon lookups). Text normalization is thus an important component of any text-to-speech (TTS) system; without it, the resulting voice may sound unintelligent. In this paper, we describe an approach to developing rule-based text normalization. We also describe our open-source repository containing text normalization grammars and tests for Bangla, Javanese, Khmer, Nepali, Sinhala and Sundanese. Finally, we present a recipe for utilizing the grammars in a TTS system.
"Text Normalization for Bangla, Khmer, Nepali, Javanese, Sinhala and Sundanese Text-to-Speech Systems". Workshop on Spoken Language Technologies for Under-resourced Languages, 2018-08-29. https://doi.org/10.21437/SLTU.2018-31
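A toy illustration of the rule-based idea follows; the rules are invented English stand-ins, not the repository's grammars for the six languages:

```python
import re

# Minimal rule-based normalization sketch: expand two NSW classes
# (abbreviations and cardinal numbers) into standard words so a lexicon
# lookup could pronounce them. Both rule tables are illustrative inventions.
ABBREV = {"Dr.": "Doctor", "St.": "Street", "kg": "kilograms"}
ONES = ["zero", "one", "two", "three", "four",
        "five", "six", "seven", "eight", "nine"]

def expand_number(match):
    # Deliberately simple rule: spell out each digit in turn.
    return " ".join(ONES[int(d)] for d in match.group())

def normalize(text):
    # Naive substring replacement; real grammars would be context-aware.
    for nsw, word in ABBREV.items():
        text = text.replace(nsw, word)
    return re.sub(r"\d+", expand_number, text)

print(normalize("Dr. Smith lives at 221 Baker St."))
# -> "Doctor Smith lives at two two one Baker Street"
```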
Uyghur is a highly agglutinative language with a large number of words derived from the same root. For such languages, the use of subwords in speech recognition is a natural choice that can solve OOV issues. However, short units in subword modeling weaken the constraint of linguistic context. Besides, vowel weakening and reduction occur frequently in Uyghur, which may lead to high deletion errors when recognizing short unit sequences. In this paper, we investigate using mixed units for Uyghur speech recognition: subwords and whole words are mixed to build a hybrid lexicon and language models for recognition. We also introduce an interpolated LM to further improve performance. Experimental results show that mixed-unit based modeling outperforms word- or subword-based modeling: about 10% relative reduction in Word Error Rate and 8% in Character Error Rate have been achieved on the test datasets compared with the baseline system.
"Investigating the Use of Mixed-Units Based Modeling for Improving Uyghur Speech Recognition". Pengfei Hu, Shen Huang, Zhiqiang Lv. Workshop on Spoken Language Technologies for Under-resourced Languages, 2018-08-29. https://doi.org/10.21437/SLTU.2018-45
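The interpolated LM mentioned above follows the standard linear-interpolation scheme, p(u) = λ·p1(u) + (1 − λ)·p2(u). A minimal sketch with toy unigram probabilities is shown here; the weight and the example units are invented, not the paper's Uyghur models:

```python
# Linear interpolation of a word-level and a subword-level unigram model.
# Unseen units get probability 0.0 from the model that lacks them.
def interpolate_lm(p_word, p_subword, lam=0.5):
    vocab = set(p_word) | set(p_subword)
    return {u: lam * p_word.get(u, 0.0) + (1 - lam) * p_subword.get(u, 0.0)
            for u in vocab}

# Toy example: "kitab" and "-lar" stand in for a whole word and a suffix unit.
mixed = interpolate_lm({"kitab": 1.0}, {"kitab": 0.5, "-lar": 0.5}, lam=0.5)
print(mixed["kitab"], mixed["-lar"])  # -> 0.75 0.25
```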
Joyshree Chakraborty, Shikhamoni Nath, R. NirmalaS., K. Samudravijaya
Machine identification of the language of input speech is of practical interest in regions where people are bilingual or multilingual. Here, we present the development of an automatic language identification system that identifies the language of input speech as Assamese, Bengali, or English. The speech databases comprise sentences read by multiple speakers using their mobile phones. The Kaldi toolkit was used to train acoustic models based on hidden Markov models in conjunction with Gaussian mixture models and deep neural networks. The accuracy of the implemented language identification system on test data is 99.3%.
"Language Identification of Assamese, Bengali and English Speech". Workshop on Spoken Language Technologies for Under-resourced Languages, 2018-08-29. https://doi.org/10.21437/SLTU.2018-37
Ao is an under-resourced Tibeto-Burman tone language spoken in Nagaland, India, with three lexical tones: high, mid and low. The language has three dialects, namely Chungli, Mongsen and Changki, which differ in tone assignment in lexical words. This work investigates whether the idiosyncratic tone assignment in the Ao dialects can be utilized to identify two of them, Changki and Mongsen. A perception test confirmed that Ao speakers identified the two dialects based on their dialect-specific tone assignment. To confirm that tone is the primary cue in dialect identification, F0 was neutralized in the speech data before it was fed to a Gaussian Mixture Model (GMM) based dialect identification system. The resulting low dialect recognition accuracy confirmed the significance of tones in Ao dialect identification. Finally, a GMM-based dialect identification system was built with tonal and spectral features, resulting in better dialect recognition accuracy.
"Dialect Identification Using Tonal and Spectral Features in Two Dialects of Ao". Moakala Tzudir, Priyankoo Sarmah, S. Prasanna. Workshop on Spoken Language Technologies for Under-resourced Languages, 2018-08-29. https://doi.org/10.21437/SLTU.2018-29
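The GMM-based identification above scores a test utterance against one model per dialect and picks the highest likelihood. A sketch reduced to a single diagonal Gaussian per dialect (a one-component GMM) illustrates that scoring step; the feature vectors are synthetic stand-ins, not the paper's tonal/spectral features:

```python
import numpy as np

# One diagonal Gaussian per dialect: fit to training features, then label a
# test vector with the dialect giving the highest log-likelihood. Training
# data below is synthetic, not Ao speech features.
def fit_gaussian(X):
    return X.mean(axis=0), X.var(axis=0) + 1e-6  # floor variance for stability

def log_likelihood(x, mean, var):
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)))

rng = np.random.default_rng(0)
train = {"changki": rng.normal(0.0, 1.0, (200, 4)),
         "mongsen": rng.normal(3.0, 1.0, (200, 4))}
models = {d: fit_gaussian(X) for d, X in train.items()}

test_vec = np.full(4, 3.1)  # lies near the "mongsen" training cluster
best = max(models, key=lambda d: log_likelihood(test_vec, *models[d]))
print(best)  # -> mongsen
```

A full GMM system would use several mixture components per dialect and frame-level features, but the max-likelihood decision rule is the same.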
Thilini Nadungodage, Chamila Liyanage, Amathri Prerera, Randil Pushpananda, R. Weerasinghe
Grapheme-to-phoneme (G2P) conversion plays an important role in speech processing applications and other fields of computational linguistics. Sinhala needs grapheme-to-phoneme conversion for speech processing because the Sinhala writing system does not always reflect actual pronunciations. This paper describes a rule-based G2P conversion method that converts Sinhala text strings into phonemic representations. We use a previously defined rule set and enhance it to obtain a more accurate G2P conversion. The performance of our system shows that the rule-based sound patterns are effective for Sinhala G2P conversion.
"Sinhala G2P Conversion for Speech Processing". Workshop on Spoken Language Technologies for Under-resourced Languages, 2018-08-29. https://doi.org/10.21437/SLTU.2018-24
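A common way to apply a rule set like the one described above is greedy longest-match rewriting over the input string. The sketch below uses toy romanized rules, not the actual Sinhala rule set from the paper:

```python
# Rule-based G2P sketch: rules are tried in order (longest graphemes first)
# and applied greedily left to right. These mappings are invented romanized
# examples, not the paper's Sinhala rules.
RULES = [("th", "t̪"), ("sh", "ʃ"), ("a", "ə"), ("t", "ʈ")]

def g2p(word):
    phones, i = [], 0
    while i < len(word):
        for graph, phone in RULES:
            if word.startswith(graph, i):
                phones.append(phone)
                i += len(graph)
                break
        else:  # no rule matched: pass the character through unchanged
            phones.append(word[i])
            i += 1
    return " ".join(phones)

print(g2p("thala"))  # -> t̪ ə l ə
```

Ordering matters: listing "th" before "t" ensures the digraph rule wins over the single-letter rule, which is why real rule sets are carefully sequenced.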
"Acoustic Characteristics of Schwa Vowel in Punjabi". Swaran Lata, Prashant Verma, S. Kaur. Workshop on Spoken Language Technologies for Under-resourced Languages, 2018-08-29. https://doi.org/10.21437/SLTU.2018-18 (no abstract available)