
2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD): Latest Publications

Autoregressive Articulatory WaveNet Flow for Speaker-Independent Acoustic-to-Articulatory Inversion
Pub Date: 2021-10-13 | DOI: 10.1109/sped53181.2021.9587350
Narjes Bozorg, Michael T. Johnson, M. Soleymanpour
In this paper we introduce a new speaker-independent method for Acoustic-to-Articulatory Inversion. The proposed architecture, Speaker Independent-Articulatory WaveNet (SI-AWN), models the relationship between acoustic and articulatory features by conditioning the articulatory trajectories on acoustic features, and then applies the trained structure to unseen target speakers. We evaluate the proposed SI-AWN on the Electromagnetic Articulography corpus of Mandarin-Accented English (EMA-MAE), using the pooled acoustic-articulatory information from 35 reference speakers and testing on target speakers that include male, female, native, and non-native speakers. The results suggest that SI-AWN improves the performance of the acoustic-to-articulatory inversion process by 21 percent compared to the baseline Maximum Likelihood Regression-Parallel Reference Speaker Weighting (MLLR-PRSW) method. To the best of our knowledge, this is the first application of a WaveNet-like synthesis approach to the problem of speaker-independent Acoustic-to-Articulatory Inversion, and the results are comparable to or better than the best currently published systems.
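For readers unfamiliar with the conditioning mechanism the abstract describes, the sketch below illustrates the general idea in PyTorch: an autoregressive stack of dilated causal convolutions over past articulatory frames, gated and conditioned on acoustic features at each layer. This is a minimal illustration of the technique family, not the authors' SI-AWN implementation; every layer size, feature dimension, and class name here is an assumption.

```python
# Hypothetical sketch of a WaveNet-style acoustic-to-articulatory inverter.
# Not the authors' code; dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class CausalConv1d(nn.Conv1d):
    """1-D convolution trimmed so output at time t depends only on
    inputs at times <= t (the autoregressive constraint)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation,
                         padding=(kernel_size - 1) * dilation)
        self._trim = (kernel_size - 1) * dilation

    def forward(self, x):
        out = super().forward(x)
        return out[..., :-self._trim] if self._trim else out

class ConditionedBlock(nn.Module):
    """Gated residual block: past articulatory context is filtered and
    gated, with acoustic conditioning added before each nonlinearity."""
    def __init__(self, channels, cond_channels, dilation):
        super().__init__()
        self.filter = CausalConv1d(channels, channels, 2, dilation)
        self.gate = CausalConv1d(channels, channels, 2, dilation)
        self.cond_f = nn.Conv1d(cond_channels, channels, 1)
        self.cond_g = nn.Conv1d(cond_channels, channels, 1)
        self.res = nn.Conv1d(channels, channels, 1)

    def forward(self, x, cond):
        f = torch.tanh(self.filter(x) + self.cond_f(cond))
        g = torch.sigmoid(self.gate(x) + self.cond_g(cond))
        return x + self.res(f * g)

class ArticulatoryInverter(nn.Module):
    """Maps past articulatory frames + acoustic features to next frames."""
    def __init__(self, art_dim=12, ac_dim=39, channels=64, n_blocks=4):
        super().__init__()
        self.inp = nn.Conv1d(art_dim, channels, 1)
        self.blocks = nn.ModuleList(
            ConditionedBlock(channels, ac_dim, 2 ** i)
            for i in range(n_blocks))
        self.out = nn.Conv1d(channels, art_dim, 1)

    def forward(self, art_past, acoustics):
        h = self.inp(art_past)
        for blk in self.blocks:
            h = blk(h, acoustics)
        return self.out(h)

# Smoke test on random tensors: batch of 2, 100 frames.
model = ArticulatoryInverter()
art = torch.randn(2, 12, 100)   # articulatory trajectories, shifted by one
ac = torch.randn(2, 39, 100)    # e.g. MFCC-like acoustic features
print(model(art, ac).shape)     # torch.Size([2, 12, 100])
```

During training the articulatory input would be the target trajectory shifted by one frame (teacher forcing); at inference the model would be run frame by frame on its own predictions, conditioned on the target speaker's acoustics.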
{"title":"Autoregressive Articulatory WaveNet Flow for Speaker-Independent Acoustic-to-Articulatory Inversion","authors":"Narjes Bozorg, Michael T. Johnson, M. Soleymanpour","doi":"10.1109/sped53181.2021.9587350","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587350","url":null,"abstract":"In this paper we introduce a new speaker independent method for Acoustic-to-Articulatory Inversion. The proposed architecture, Speaker Independent-Articulatory WaveNet (SI-AWN), models the relationship between acoustic and articulatory features by conditioning the articulatory trajectories on acoustic features and then utilizes the structure for unseen target speakers. We evaluate the proposed SI-AWN on the Electro Magnetic Articulography corpus of Mandarin Accented English (EMA-MAE), using the pool of acoustic-articulatory information from 35 reference speakers and testing on target speakers that include male, female, native and non-native speakers. The results suggest that SI-AWN improves the performance of the acoustic-to-articulatory inversion process compared to the baseline Maximum Likelihood Regression-Parallel Reference Speaker Weighting (MLLR-PRSW) method by 21 percent. To the best of our knowledge, this is the first application of a WaveNet-like synthesis approach to the problem of Speaker Independent Acoustic-to-Articulatory Inversion, and results are comparable to or better than the best currently published systems.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126744968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Infant Vocal Tract Development Analysis and Diagnosis by Cry Signals with CNN Age Classification
Pub Date: 2021-04-23 | DOI: 10.1109/sped53181.2021.9587391
Chunyan Ji, Yi Pan
From crying to babbling and then to speech, an infant's vocal tract goes through anatomic restructuring. In this paper, we propose a non-invasive, fast method that uses infant cry signals with convolutional neural network (CNN) based age classification to diagnose abnormal vocal tract development as early as 4 months of age. We study F0, F1, F2, and spectrograms of the audio signals and relate them to the postnatal development of infant vocalization. We perform two age classification experiments: a vocal tract development experiment and a vocal tract development diagnosis experiment. The vocal tract development experiment, trained on the Baby2020 database, discovers the pattern and tendency of the vocal tract changes, and the result matches the anatomical development of the vocal tract. The vocal tract development diagnosis experiment predicts abnormal infant vocal tract development by classifying cry signals into a younger age category. The diagnosis model is trained on healthy infant cries from the Baby2020 database; cries from other infants in the Baby2020 and Baby Chillanto databases are used as test sets. The diagnosis experiment yields 79.20% accuracy on healthy infants, while 84.80% of asphyxiated infant cries and 91.20% of deaf infant cries are classified as younger than 4 months even though they come from infants up to 9 months old. These results indicate that delayed-developing cries are associated with abnormal vocal tract development.
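As a rough illustration of the classification setup described above, here is a minimal PyTorch sketch of a CNN that maps a cry spectrogram to an age bin. The architecture, input size, and age bins are illustrative assumptions, not the paper's model.

```python
# Hypothetical sketch of CNN age classification from cry spectrograms.
# Class count, input size, and layers are assumptions, not the paper's model.
import torch
import torch.nn as nn

class CryAgeCNN(nn.Module):
    """Classifies a log-mel spectrogram (1 x 64 mel bands x 128 frames)
    into coarse age bins, e.g. 0-4, 4-8, 8-12 months."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# The diagnosis idea: train on healthy cries only; a cry from an older
# infant that the model places in a younger bin flags possible delay.
model = CryAgeCNN()
spec = torch.randn(4, 1, 64, 128)   # batch of 4 log-mel spectrograms
print(model(spec).argmax(dim=1))    # predicted age bin per cry
```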
{"title":"Infant Vocal Tract Development Analysis and Diagnosis by Cry Signals with CNN Age Classification","authors":"Chunyan Ji, Yi Pan","doi":"10.1109/sped53181.2021.9587391","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587391","url":null,"abstract":"From crying to babbling and then to speech, infants’ vocal tract goes through anatomic restructuring. In this paper, we propose a non-invasive fast method of using infant cry signals with convolutional neural network (CNN) based age classification to diagnose the abnormality of vocal tract development as early as 4-month age. We study F0, F1, F2, spectrograms of the audio signals and relate them to the postnatal development of infant vocalization. We perform two age classification experiments: vocal tract development experiment and vocal tract development diagnosis experiment. The vocal tract development experiment trained on Baby2020 database discovers the pattern and tendency of the vocal tract changes, and the result matches the anatomical development of the vocal tract. The vocal tract development diagnosis experiment predicts the abnormality of infant vocal tract by classifying the cry signals into younger age category. The diagnosis model is trained on healthy infant cries from Baby2020 database. Cries from other infants in Baby2020 and Baby Chillanto database are used as testing sets. The diagnosis experiment yields 79.20% accuracy on healthy infants, 84.80% asphyxiated infant cries and 91.20% deaf cries are diagnosed as cries younger than 4-month although they are from infants up to 9-month-old. The results indicate the delayed developed cries are associated with abnormal vocal tract development.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123819171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Effects of F0 Estimation Algorithms on Ultrasound-Based Silent Speech Interfaces
Pub Date: 2020-10-21 | DOI: 10.1109/sped53181.2021.9587434
Peng Dai, M. Al-Radhi, T. Csapó
This paper presents recent Silent Speech Interface (SSI) progress on translating tongue motions into audible speech. In our previous work, and again in the current study, the prediction of fundamental frequency (F0) from Ultrasound Tongue Images (UTI) was achieved using articulatory-to-acoustic mapping methods based on deep learning. Here we investigate several traditional discontinuous F0 estimation algorithms for use in a UTI-based SSI system. In addition, the vocoder parameters (F0, Maximum Voiced Frequency, and Mel-Generalized Cepstrum) are predicted using deep neural networks, with UTI as input. We found that the discontinuous F0 algorithms are predicted with lower error in the articulatory-to-acoustic mapping experiments, and they result in slightly more natural synthesized speech than the baseline continuous F0 algorithm. Moreover, experimental results confirmed that discontinuous algorithms (e.g., Yin) are closest to the original speech in both objective metrics and a subjective listening test.
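The distinction between discontinuous and continuous F0 tracks that the paper evaluates can be illustrated with librosa's pYIN implementation (a member of the Yin family mentioned above): a discontinuous track leaves unvoiced frames undefined, while a continuous track interpolates through them. The sketch below only contrasts the two representations; it is not the paper's pipeline, and the bundled example audio is a stand-in for real speech.

```python
# Contrast a discontinuous F0 track (NaN on unvoiced frames) with a
# continuous one (interpolated through unvoiced regions). Illustrative
# only; not the paper's pipeline.
import numpy as np
import librosa

# Any speech recording works here; librosa's bundled example is a stand-in.
y, sr = librosa.load(librosa.example("trumpet"))

# Discontinuous track: pYIN marks unvoiced frames as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)

# Continuous track: linearly interpolate F0 through unvoiced gaps,
# as continuous-F0 vocoder front ends typically do.
t = np.arange(len(f0))
voiced = ~np.isnan(f0)
f0_cont = np.interp(t, t[voiced], f0[voiced])

print(f"{voiced.mean():.0%} of frames voiced")
print(f"mean continuous F0: {f0_cont.mean():.1f} Hz")
```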
{"title":"Effects of F0 Estimation Algorithms on Ultrasound-Based Silent Speech Interfaces","authors":"Peng Dai, M. Al-Radhi, T. Csapó","doi":"10.1109/sped53181.2021.9587434","DOIUrl":"https://doi.org/10.1109/sped53181.2021.9587434","url":null,"abstract":"This paper shows recent Silent Speech Interface (SSI) progress that translates tongue motions into audible speech. In our previous work and also in the current study, the prediction of fundamental frequency (F0) from Ultra-Sound Tongue Images (UTI) was achieved using articulatory-to-acoustic mapping methods based on deep learning. Here we investigated several traditional discontinuous speech-based F0 estimation algorithms for the target of UTI-based SSI system. Besides, the vocoder parameters (F0, Maximum Voiced Frequency and Mel-Generalized Cepstrum) are predicted using deep neural networks, with UTI as input. We found that those discontinuous F0 algorithms are predicted with a lower error during the articulatory-to-acoustic mapping experiments. They result in slightly more natural synthesized speech than the baseline continuous F0 algorithm. Moreover, experimental results confirmed that discontinuous algorithms (e.g. Yin) are closest to original speech in objective metrics and subjective listening test.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129316861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0