Creation and Analysis of Emotional Speech Database for Multiple Emotions Recognition

Ryota Sato, Ryohei Sasaki, Norisato Suga, T. Furukawa
DOI: 10.1109/O-COCOSDA50338.2020.9295041

Published in: 2020 23rd Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA)

Publication date: 2020-11-05
Citations: 4

Abstract

Speech emotion recognition (SER) is one of the latest challenges in human-computer interaction. Conventional SER classification methods output a single emotion label per utterance as the estimation result, because the speech emotion databases used to train SER models carry only one emotion label per utterance. However, human speech often expresses multiple emotions simultaneously, with different intensities. To realize more natural SER, the presence of multiple emotions in one utterance should be taken into account. We therefore created an emotional speech database that labels multiple emotions and their intensities. The database was built by extracting utterance segments in which emotions appear from existing video works, and we evaluated it through statistical analysis. As a result, 2,025 samples were obtained, of which 1,525 contained multiple emotions.
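The key difference from conventional single-label databases is that each utterance can carry several emotion labels, each with its own intensity. The following is a minimal sketch of what such an annotation scheme might look like and how the multi-emotion sample count could be derived; the class name, field names, and intensity scale are illustrative assumptions, not the authors' actual schema.

```python
from dataclasses import dataclass

@dataclass
class UtteranceLabel:
    """One annotated utterance (hypothetical format, not the paper's schema)."""
    utterance_id: str
    # Emotion name -> intensity (assumed scale, e.g. 1 = weak .. 3 = strong).
    # Allowing several entries models the multiple-emotions-per-utterance labeling.
    emotions: dict

def multi_emotion_ratio(labels):
    """Fraction of samples annotated with more than one emotion."""
    multi = sum(1 for u in labels if len(u.emotions) > 1)
    return multi / len(labels)

# Toy examples only; the real database has 2,025 samples.
samples = [
    UtteranceLabel("utt_001", {"joy": 2, "surprise": 1}),
    UtteranceLabel("utt_002", {"anger": 3}),
    UtteranceLabel("utt_003", {"sadness": 2, "fear": 1}),
]
print(multi_emotion_ratio(samples))  # 2 of the 3 toy samples carry multiple emotions
```

Under this representation, the paper's headline statistic (1,525 of 2,025 samples containing multiple emotions) corresponds to a multi-emotion ratio of roughly 0.75.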