Dysarthric Speech Augmentation Using Prosodic Transformation and Masking for Subword End-to-end ASR
Pub Date: 2021-10-13 | DOI: 10.1109/sped53181.2021.9587372
M. Soleymanpour, Michael T. Johnson, J. Berry
End-to-end speech recognition systems are effective, but training an end-to-end model requires a large amount of data. For applications such as dysarthric speech recognition, sufficient data is not available. In this paper, we propose a specialized data augmentation approach to enhance the performance of an end-to-end dysarthric ASR system based on sub-word models. The proposed approach combines two methods: prosodic transformation and time-feature masking. Prosodic transformation modifies the speaking rate and pitch of normal speech to control prosodic characteristics such as loudness, intonation, and rhythm. With time and feature masking, we apply masks to the Mel-Frequency Cepstral Coefficients (MFCCs) for robustness-focused augmentation. Results show that augmenting normal speech with prosodic transformation plus masking decreases CER by 5.4% and WER by 5.6%, and the further addition of dysarthric speech masking decreases CER by 11.3% and WER by 11.4%.
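For illustration, here is a minimal sketch of the time-and-feature masking step on an MFCC matrix, in the spirit of SpecAugment-style augmentation; the function name, mask widths, and mean-fill choice are assumptions for the sketch, not details taken from the paper.

import numpy as np

def mask_mfcc(mfcc, num_time_masks=2, max_time_width=20,
              num_feat_masks=1, max_feat_width=4, rng=None):
    """Randomly mask time spans and coefficient bands of an MFCC matrix.

    mfcc: array of shape (n_coeffs, n_frames); masked cells are set to the
    per-utterance mean (a hypothetical choice; zeroing is another option).
    """
    rng = rng or np.random.default_rng()
    out = mfcc.copy()
    fill = mfcc.mean()
    n_coeffs, n_frames = out.shape

    for _ in range(num_time_masks):        # time masking: hide contiguous frame spans
        w = int(rng.integers(0, max_time_width + 1))
        t0 = int(rng.integers(0, max(1, n_frames - w)))
        out[:, t0:t0 + w] = fill

    for _ in range(num_feat_masks):        # feature masking: hide coefficient bands
        w = int(rng.integers(0, max_feat_width + 1))
        f0 = int(rng.integers(0, max(1, n_coeffs - w)))
        out[f0:f0 + w, :] = fill

    return out

# The prosodic-transformation side can similarly be sketched on the waveform
# with librosa's standard effects (rate/step values illustrative only):
#   y_fast = librosa.effects.time_stretch(y, rate=1.2)
#   y_high = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)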
{"title":"Dysarthric Speech Augmentation Using Prosodic Transformation and Masking for Subword End-to-end ASR","authors":"M. Soleymanpour, Michael T. Johnson, J. Berry","doi":"10.1109/sped53181.2021.9587372","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587372","url":null,"abstract":"End-to-end speech recognition systems are effective, but in order to train an end-to-end model, a large amount of training data is needed. For applications such as dysarthric speech recognition, we do not have sufficient data. In this paper, we propose a specialized data augmentation approach to enhance the performance of an end-to-end dysarthric ASR based on sub-word models. The proposed approach contains two methods, including prosodic transformation and time-feature masking. Prosodic transformation modifies the speaking rate and pitch of normal speech to control prosodic characteristics such as loudness, intonation, and rhythm. Using time and feature masking, we apply a mask to the Mel Frequency Cepstral Coefficients (MFCC) for robustness-focused augmentation. Results show that augmenting normal speech with prosodic transformation plus masking decreases CER by 5.4% and WER by 5.6%, and the further addition of dysarthric speech masking decreases CER by 11.3% and WER by 11.4%.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133725974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[SpeD 2021 Front cover]
Pub Date: 2021-10-13 | DOI: 10.1109/sped53181.2021.9587348
{"title":"[SpeD 2021 Front cover]","authors":"","doi":"10.1109/sped53181.2021.9587348","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587348","url":null,"abstract":"","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132980884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Word Embeddings for Romanian Language and Their Use for Synonyms Detection
Pub Date: 2021-10-13 | DOI: 10.1109/sped53181.2021.9587432
M. Popescu, C. Rusu, L. Grama
The aim of this paper is to present results on word embeddings for the Romanian language based on the word2vec method. More concretely, we generate word embeddings of different lengths, using different preprocessing and training techniques. The embeddings are general purpose, and we use the Romanian-language version of Wikipedia as the corpus. We also evaluate the computational resources needed for the task. The embeddings are validated through experiments on synonym detection, using a new dataset created for this purpose. The code and the dataset are made publicly available. The results indicate that these embeddings can also be used with summarization approaches.
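As a hedged illustration of the pipeline described, the sketch below trains word2vec embeddings with gensim on a preprocessed Romanian Wikipedia text file and runs a nearest-neighbour synonym query; the file path, hyperparameters, and query word are assumptions, not the paper's actual settings.

from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# Assumed input: one preprocessed (lowercased, tokenised) Wikipedia sentence per line.
sentences = LineSentence("rowiki_sentences.txt")  # hypothetical path

model = Word2Vec(
    sentences,
    vector_size=300,   # embedding length; the paper compares several lengths
    window=5,
    min_count=10,
    sg=1,              # skip-gram; CBOW (sg=0) is the other common choice
    workers=4,
    epochs=5,
)
model.save("rowiki_w2v.model")

# Synonym detection as a nearest-neighbour query in the embedding space:
# near-synonyms should rank highly by cosine similarity.
print(model.wv.most_similar("frumos", topn=5))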
{"title":"Word Embeddings for Romanian Language and Their Use for Synonyms Detection","authors":"M. Popescu, C. Rusu, L. Grama","doi":"10.1109/sped53181.2021.9587432","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587432","url":null,"abstract":"The aim of this paper is to present some results on word embeddings for the Romanian language, based on the word2vec method. More concretely, we generate word embeddings of different lengths, and using different preprocessing and training techniques. The embeddings are general purpose, and we use the Romanian language version of Wikipedia as corpus. We also evaluate the computational resources needed for the task. The embeddings are validated by performing some experiments on synonyms detection, using a new dataset created for this purpose. The code and the dataset are made publicly available. The results indicate that these types of embeddings can be used with the summarization approaches.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122779573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Detection of Synthetic Utterances in Romanian Language Speech Forensics
Pub Date: 2021-10-13 | DOI: 10.1109/sped53181.2021.9587393
Gheorghe Pop, D. Burileanu
The last decade has seen a huge wave of interest in the synthesis of human image and speech. Besides the enormous impact of synthetic voice on communication between humans and machines, the production of so-called “fake media” has entered the focus of the forensic audio and video communities. A large variety of techniques are now available to produce synthetic speech, from traditional concatenative speech production to multi-million-parameter speech and speaker models. Recent work in the field of artificial intelligence (AI) has shown that some synthetic speech generators are capable of fooling even state-of-the-art automatic speaker verification systems. AI seems to hold the key to successful speaker spoofing attacks, but also to their countermeasures. As a first step, this paper describes a data-centric method to detect synthetically generated spoken digits in the Romanian language.
{"title":"Towards Detection of Synthetic Utterances in Romanian Language Speech Forensics","authors":"Gheorghe Pop, D. Burileanu","doi":"10.1109/sped53181.2021.9587393","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587393","url":null,"abstract":"The latest decade has seen a huge wave of interest in the synthesis of human image and speech. Besides the enormous impact of synthetic voice in the communication between humans and machines, the production of the so-called “fake media” entered the focus of forensic audio and video communities. A large variety of techniques are now available to produce synthetic speech, from the traditional concatenative speech production to multi-million parameter speech and speaker models. Recent work in the field of artificial intelligence (AI) has shown some synthetic speech generators as capable to fool even state-of-the-art automatic speaker verification systems. AI seems to hold the key to successful speaker spoofing attacks, but also for their countermeasures. As a first step on the way, this paper describes a data-centric method to detect the use of synthetically generated spoken digits in the Romanian language.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121386863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synthetic Speech Detection Using Neural Networks
Pub Date: 2021-10-13 | DOI: 10.1109/sped53181.2021.9587406
Ricardo Reimao, Vassilios Tzerpos
Computer-generated speech has improved drastically due to advancements in voice synthesis using deep learning techniques. The latest speech synthesizers achieve such a high level of naturalness that humans have difficulty distinguishing real speech from computer-generated speech. These technologies allow anyone to train a synthesizer on a target voice, creating a model that can reproduce someone’s voice with high fidelity. This technology can be used in legitimate commercial applications (e.g., call centres) as well as in criminal activities, such as the impersonation of someone’s voice. In this paper, we analyze how synthetic speech is generated and propose deep learning methodologies to detect such synthesized utterances. Using a large dataset containing both synthetic and real speech, we analyzed the performance of the latest deep learning models in classifying such utterances. Our proposed model achieves up to 92.00% accuracy in detecting unseen synthetic speech, a significant improvement over human performance (65.7%).
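As a minimal sketch of the kind of detector evaluated (not the authors' actual architecture), the following PyTorch module classifies log-mel spectrogram patches as real vs. synthetic; the input shape and layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class SpoofDetector(nn.Module):
    """Minimal CNN over log-mel spectrogram patches: real vs. synthetic speech.

    Illustrative only; the paper evaluates several (larger) deep models.
    Expected input: (batch, 1, n_mels, n_frames), e.g. (B, 1, 64, 128).
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),          # global pooling to a 32-dim embedding
        )
        self.classifier = nn.Linear(32, 2)    # logits: [real, synthetic]

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = SpoofDetector()
logits = model(torch.randn(8, 1, 64, 128))    # dummy batch of spectrogram patches
print(logits.shape)                           # torch.Size([8, 2])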
{"title":"Synthetic Speech Detection Using Neural Networks","authors":"Ricardo Reimao, Vassilios Tzerpos","doi":"10.1109/sped53181.2021.9587406","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587406","url":null,"abstract":"Computer generated speech has improved drastically due to advancements in voice synthesis using deep learning techniques. The latest speech synthesizers achieve such high level of naturalness that humans have difficulty distinguishing real speech from computer generated speech. These technologies allow any person to train a synthesizer with a target voice, creating a model that is able to reproduce someone’s voice with high fidelity. This technology can be used in several legit commercial applications (e.g. call centres) as well as criminal activities, such as the impersonation of someone’s voice.In this paper, we analyze how synthetic speech is generated and propose deep learning methodologies to detect such synthesized utterances. Using a large dataset containing both synthetic and real speech, we analyzed the performance of the latest deep learning models in the classification of such utterances. Our proposed model achieves up to 92.00% accuracy in detecting unseen synthetic speech, which is a significant improvement from human performance (65.7%).","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125892988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Versatility and Population Diversity of Evolutionary Algorithms in Automated Circuit Sizing Applications
Pub Date: 2021-10-13 | DOI: 10.1109/sped53181.2021.9587352
C. Vișan, Octavian Pascu, Marius Stanescu, H. Cucu, C. Diaconu, Andi Buzo, G. Pelz
In modern circuit design, highly specialized engineers use computer tools to increase their chances of finding the best configurations while decreasing development time. However, certain tasks, like circuit sizing, consist of trial-and-error processes that require the designer’s attention for a variable amount of time, usually directly proportional to the complexity of the circuit. To minimize the R&D costs of a circuit, relieving the designer of such repetitive tasks is essential; thus, the trend of replacing manual circuit sizing with AI solutions is growing. In this context, we compare the five most promising Evolutionary Algorithms for circuit sizing automation. The focus of this paper is to assess the performance of these algorithms in terms of versatility and population diversity.
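To make the setup concrete, here is a generic evolutionary-algorithm loop for parameter sizing; this is a minimal sketch rather than any of the five algorithms compared, and the operators and hyperparameters are illustrative assumptions.

import random

def evolve(fitness, bounds, pop_size=40, generations=100,
           mutation_rate=0.1, elite=2):
    """Minimal real-valued evolutionary algorithm for parameter sizing.

    fitness: maps a parameter vector (e.g. transistor widths/lengths) to a
             score to MINIMISE (e.g. deviation from circuit specs).
    bounds:  list of (low, high) pairs, one per sizing parameter.
    """
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    for _ in range(generations):
        pop.sort(key=fitness)                                # best designs first
        next_pop = pop[:elite]                               # elitism: keep best designs
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)   # parents from the better half
            child = [random.choice(g) for g in zip(p1, p2)]  # uniform crossover
            for i, (lo, hi) in enumerate(bounds):            # bounded random mutation
                if random.random() < mutation_rate:
                    child[i] = random.uniform(lo, hi)
            next_pop.append(child)
        pop = next_pop

    return min(pop, key=fitness)

# Toy usage: "size" two parameters toward a target operating point.
best = evolve(lambda x: (x[0] - 2.0) ** 2 + (x[1] - 0.5) ** 2,
              bounds=[(0.1, 10.0), (0.1, 5.0)])
print(best)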
{"title":"Versatility and Population Diversity of Evolutionary Algorithms in Automated Circuit Sizing Applications","authors":"C. Vișan, Octavian Pascu, Marius Stanescu, H. Cucu, C. Diaconu, Andi Buzo, G. Pelz","doi":"10.1109/sped53181.2021.9587352","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587352","url":null,"abstract":"In modern circuit design, highly specialized engineers are using computer tools to increase their chance of finding the best configurations, while decreasing the development time. However, certain tasks, like circuit sizing, consist of try and error processes that require the designer’s attention for a variable amount of time. The task duration is usually directly proportional to the complexity of the circuit. To minimize the R&D costs of the circuit, relieving the designer from the repetitive tasks is essential. Thus, the trend of replacing manual-based circuit sizing by AI solutions is growing. In this context, we are comparing the five most promising Evolutionary Algorithms for circuit sizing automation. The focus of this paper is to assess the performance of the algorithms in terms of versatility and population diversity.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130708327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks for Automatic Environmental Sound Recognition
Pub Date: 2021-10-13 | DOI: 10.1109/sped53181.2021.9587378
Svetlana Segarceanu, G. Suciu, I. Gavat
Environmental sound recognition is currently an important and valuable field, with applications in computer science, robotics, security, and environmental protection. The underlying methodology has evolved from methods characteristic of early speech applications to more task-specific approaches, and with the advent of the deep learning paradigm, many attempts using these methods have arisen. This paper resumes our earlier research on the application of Feed-Forward Neural Networks by exploring several configurations, and introduces Convolutional Neural Networks into our investigation. The experiments consider three classes of forest-specific sounds and are meant to distinguish chainsaw sounds, vehicle sounds, and genuine forest ambience.
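Below is a minimal sketch of a configurable feed-forward classifier for the three sound classes, assuming fixed-length audio feature vectors; the feature dimension and hidden-layer sizes are illustrative, not the paper's configurations.

import torch
import torch.nn as nn

# The three target classes from the paper: chainsaw, vehicle, genuine forest.
CLASSES = ["chainsaw", "vehicle", "forest"]

def make_ffnn(n_features=40, hidden=(128, 64), n_classes=len(CLASSES)):
    """Small configurable feed-forward net over fixed-length audio features."""
    layers, width = [], n_features
    for h in hidden:                        # one Linear+ReLU block per hidden size
        layers += [nn.Linear(width, h), nn.ReLU()]
        width = h
    layers.append(nn.Linear(width, n_classes))
    return nn.Sequential(*layers)

model = make_ffnn(hidden=(256, 128, 64))    # exploring configurations = varying `hidden`
logits = model(torch.randn(16, 40))         # dummy batch of 40-dim feature vectors
print(logits.shape)                         # torch.Size([16, 3])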
{"title":"Neural Networks for Automatic Environmental Sound Recognition","authors":"Svetlana Segarceanu, G. Suciu, I. Gavat","doi":"10.1109/sped53181.2021.9587378","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587378","url":null,"abstract":"Environmental sound recognition is currently an important and valuable field of computer science and robotics, security or environmental protection. The underlying methodology evolved from primary speech application characteristic methods to more specific approaches, and with the advent of the deep learning paradigm many attempts using these methods arose. The paper reopens the research we have started on the application of the Feed Forward Neural Networks, by exploring several configurations, and introduces the Convolutional Neural Networks in our investigation. The experiments consider three classes of forest specific sounds and meant to detect the chainsaw sounds, vehicle, and genuine forest.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115016507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Results on the MFCC extraction for improving audio capabilities of TIAGo service robot
Pub Date: 2021-10-13 | DOI: 10.1109/sped53181.2021.9587416
Toma Telembici, L. Grama, Lorena Muscar, C. Rusu
The purpose of this paper is to obtain, through simulations, high correct classification rates for isolated audio event detection. To obtain the audio signals, we used a service robot named TIAGo in scenarios simulating everyday life. Mel-Frequency Cepstral Coefficient (MFCC) features are extracted from each audio signal and then classified with the k-Nearest Neighbors algorithm. To better analyze performance, six additional non-MFCC coefficients are extracted besides the MFCCs. We vary the number of neighbors for the k-Nearest Neighbors algorithm, as well as the percentage of audio signals used for training and testing. Simulations also compare distance metrics, implementing both the Euclidean and Manhattan distances. All these scenarios and their combinations are explored in this paper. The highest correct classification rate, 99.27%, is obtained for MFCC features using 70% of the input data for training, 5 neighbors, and the Euclidean metric.
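The described setup maps directly onto standard tooling; below is a hedged sketch using librosa MFCCs and scikit-learn's k-NN with the 70/30 split, k=5, and the two distance metrics. Here `files` and `labels` are assumed inputs, not artifacts from the paper.

import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def mfcc_vector(path, n_mfcc=13):
    """Fixed-length feature vector: per-coefficient MFCC means over the clip."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Assumed inputs: `files` (list of audio paths) and `labels` (event class per file).
X = np.stack([mfcc_vector(f) for f in files])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, train_size=0.7,
                                          stratify=labels, random_state=0)

for metric in ("euclidean", "manhattan"):    # the two distances compared in the paper
    knn = KNeighborsClassifier(n_neighbors=5, metric=metric).fit(X_tr, y_tr)
    print(metric, f"{knn.score(X_te, y_te):.4f}")  # correct classification rate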
{"title":"Results on the MFCC extraction for improving audio capabilities of TIAGo service robot","authors":"Toma Telembici, L. Grama, Lorena Muscar, C. Rusu","doi":"10.1109/sped53181.2021.9587416","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587416","url":null,"abstract":"The purpose of this paper is to obtain through simulations high correct classification rates for isolated audio events detection. To obtain the audio signals, we have used a service robot named TIAGo that simulates scenarios from our everyday life. Mel Frequency Cepstral Coefficients features will be extracted for each audio signal. Then will be classified based on the k-Nearest Neighbors algorithm. To better analyze the performance, besides Mel Frequency Cepstral Coefficients coefficients, 6 more coefficients, non- Mel Frequency Cepstral Coefficients, will be extracted. The number of neighbors for the k-Nearest Neighbors algorithm will vary and also the percent value that represents the number of audio signals used for training or for testing. Simulations will be done also about the metrics and distance. For this, Euclidean and Manhattan metric-distance will be implemented. All these scenarios and combinations of them will be perform through this paper. The highest correct classification rate, 99.27%, is obtained for Mel Frequency Cepstral Coefficients using 70% of input data for training, 5 neighbors and the Euclidean metric.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128549520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The MARA corpus: Expressivity in end-to-end TTS systems using synthesised speech data
Pub Date: 2021-10-13 | DOI: 10.1109/sped53181.2021.9587438
Adriana Stan, Beáta Lőrincz, Maria Nutu, M. Giurgiu
This paper introduces the MARA corpus, a large expressive Romanian speech corpus containing over 11 hours of high-quality data recorded by a professional female speaker. The data is orthographically transcribed, manually segmented at the utterance level, and semi-automatically aligned at the phone level. The associated text is processed by a complete linguistic feature extractor composed of: text normalisation, phonetic transcription, syllabification, lexical stress assignment, lemma extraction, part-of-speech tagging, chunking, and dependency parsing. Using the MARA corpus, we evaluate the use of synthesised speech as training data in end-to-end speech synthesis systems. The synthesised data copies the original phone durations and F0 patterns of the most expressive utterances from MARA. Five systems with different sets of expressive data are trained. The objective and subjective results show that the lower quality of the synthesised speech data is averaged out by the synthesis network, and that no statistically significant differences are found between the systems’ expressivity and naturalness evaluations.
{"title":"The MARA corpus: Expressivity in end-to-end TTS systems using synthesised speech data","authors":"Adriana Stan, Beáta Lőrincz, Maria Nutu, M. Giurgiu","doi":"10.1109/sped53181.2021.9587438","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587438","url":null,"abstract":"This paper introduces the MARA corpus, a large expressive Romanian speech corpus containing over 11 hours of high-quality data recorded by a professional female speaker. The data is orthographically transcribed, manually segmented at utterance level and semi-automatically aligned at phone-level. The associated text is processed by a complete linguistic feature extractor composed of: text normalisation, phonetic transcription, syllabification, lexical stress assignment, lemma extraction, part-of-speech tagging, chunking and dependency parsing.Using the MARA corpus, we evaluate the use of synthesised speech as training data in end-to-end speech synthesis systems. The synthesised data copies the original phone duration and F0 patterns of the most expressive utterances from MARA. Five systems with different sets of expressive data are trained. The objective and subjective results show that the low quality of the synthesised speech data is averaged out by the synthesis network, and that no statistically significant differences are found between the systems’ expressivity and naturalness evaluations.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"53 210 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125953475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Segmentation of Texts based on Stylistic Features
Pub Date: 2021-10-13 | DOI: 10.1109/sped53181.2021.9587362
H. Teodorescu, Cecilia Bolea
We report on an automatic method and program for text structure discovery and subsequent segmentation of texts. The method, previously presented and enhanced herein, is based on stylistic features. The segmentation was applied to two autobiographical works; the results are compared and conclusions are drawn. The method can be used as a tool in text generation, in editorial offices, and in literary analysis.
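Since the abstract does not detail the feature set, the sketch below only illustrates the general idea with toy stylistic features (mean sentence length, mean word length, type-token ratio) and a divergence threshold for boundary placement; all names and the threshold are assumptions, not the authors' method.

import re
import numpy as np

def stylistic_features(block):
    """Toy stylistic profile of a text block; illustrative features only."""
    sentences = [s for s in re.split(r"[.!?]+", block) if s.strip()]
    words = re.findall(r"\w+", block.lower())
    if not words:
        return np.zeros(3)
    return np.array([
        len(words) / max(1, len(sentences)),      # mean sentence length (words)
        np.mean([len(w) for w in words]),         # mean word length (chars)
        len(set(words)) / len(words),             # type-token ratio
    ])

def segment(paragraphs, threshold=1.0):
    """Place a boundary where consecutive paragraphs' style profiles diverge."""
    feats = [stylistic_features(p) for p in paragraphs]
    return [i for i in range(1, len(feats))
            if np.linalg.norm(feats[i] - feats[i - 1]) > threshold]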
{"title":"Automatic Segmentation of Texts based on Stylistic Features","authors":"H. Teodorescu, Cecilia Bolea","doi":"10.1109/sped53181.2021.9587362","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587362","url":null,"abstract":"We report on an automatic method and program for text structure discovery and subsequent segmentation of texts. The method, previously presented and herein enhanced, is based on stylistic features. The segmentation was applied to two self-biographic works; the results are compared and conclusions are derived. The method can be used as a tool in text generation, as a tool in editorial offices, and in literary analysis.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114285500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}