Results on the MFCC extraction for improving audio capabilities of TIAGo service robot

Toma Telembici, L. Grama, Lorena Muscar, C. Rusu
{"title":"Results on the MFCC extraction for improving audio capabilities of TIAGo service robot","authors":"Toma Telembici, L. Grama, Lorena Muscar, C. Rusu","doi":"10.1109/sped53181.2021.9587416","DOIUrl":null,"url":null,"abstract":"The purpose of this paper is to obtain through simulations high correct classification rates for isolated audio events detection. To obtain the audio signals, we have used a service robot named TIAGo that simulates scenarios from our everyday life. Mel Frequency Cepstral Coefficients features will be extracted for each audio signal. Then will be classified based on the k-Nearest Neighbors algorithm. To better analyze the performance, besides Mel Frequency Cepstral Coefficients coefficients, 6 more coefficients, non- Mel Frequency Cepstral Coefficients, will be extracted. The number of neighbors for the k-Nearest Neighbors algorithm will vary and also the percent value that represents the number of audio signals used for training or for testing. Simulations will be done also about the metrics and distance. For this, Euclidean and Manhattan metric-distance will be implemented. All these scenarios and combinations of them will be perform through this paper. The highest correct classification rate, 99.27%, is obtained for Mel Frequency Cepstral Coefficients using 70% of input data for training, 5 neighbors and the Euclidean metric.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/sped53181.2021.9587416","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The purpose of this paper is to obtain, through simulations, high correct classification rates for the detection of isolated audio events. To obtain the audio signals, we used the TIAGo service robot to simulate scenarios from everyday life. Mel Frequency Cepstral Coefficient (MFCC) features are extracted from each audio signal and then classified with the k-Nearest Neighbors (kNN) algorithm. To better analyze the performance, six additional non-MFCC coefficients are extracted alongside the MFCCs. The number of neighbors in the kNN classifier is varied, as is the percentage of audio signals used for training versus testing. The distance metric is also varied: both the Euclidean and the Manhattan distances are implemented. All of these scenarios, and combinations of them, are evaluated in this paper. The highest correct classification rate, 99.27%, is obtained with MFCC features using 70% of the input data for training, 5 neighbors, and the Euclidean metric.
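For readers who want to reproduce this kind of experiment, the following is a minimal Python sketch of the pipeline described above: MFCC extraction (here via librosa) followed by a kNN classifier whose train/test split, neighbor count, and distance metric (Euclidean or Manhattan) are swept. The corpus layout, the 13-coefficient MFCC setting, and the swept values are illustrative assumptions, not the authors' exact configuration, and the paper's six additional non-MFCC coefficients are omitted here.

```python
# Sketch only: MFCC + kNN sweep over train size, k, and distance metric.
# Dataset layout ("data/<event_label>/<recording>.wav") is a placeholder.
from pathlib import Path

import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score


def mfcc_features(path, n_mfcc=13):
    """Load one audio file and return a fixed-length feature vector:
    the mean of each MFCC coefficient over time."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)


# Hypothetical corpus of isolated audio events recorded with the robot;
# each subdirectory name is used as the class label.
dataset = [(str(wav), wav.parent.name) for wav in Path("data").glob("*/*.wav")]

X = np.array([mfcc_features(path) for path, _ in dataset])
y = np.array([label for _, label in dataset])

# Sweep the parameters mentioned in the abstract: training percentage,
# number of neighbors, and distance metric (Euclidean / Manhattan).
for train_size in (0.5, 0.6, 0.7):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=train_size, stratify=y, random_state=0)
    for k in (1, 3, 5):
        for metric in ("euclidean", "manhattan"):
            clf = KNeighborsClassifier(n_neighbors=k, metric=metric)
            clf.fit(X_tr, y_tr)
            acc = accuracy_score(y_te, clf.predict(X_te))
            print(f"train={train_size:.0%}  k={k}  {metric}: {acc:.2%}")
```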