Project Vāc: Can a Text-to-Speech Engine Generate Human Sentiments?

S. Kulkarni, Luis Barbado, Jordan Hosier, Yu Zhou, Siddharth Rajagopalan, V. Gurbani
{"title":"项目Vāc:文本转语音引擎能产生人类情感吗?","authors":"S. Kulkarni, Luis Barbado, Jordan Hosier, Yu Zhou, Siddharth Rajagopalan, V. Gurbani","doi":"10.1109/sped53181.2021.9587366","DOIUrl":null,"url":null,"abstract":"Sentiment analysis is an important area of natural language processing (NLP) research, and is increasingly being performed by machine learning models. Much of the work in this area is concentrated on extracting sentiment from textual data sources. Clearly however, a textual source does not convey the pitch, prosody, or power of the spoken sentiment, making it attractive to extract sentiments from an audio stream. A fundamental prerequisite for sentiment analysis on audio streams is the availability of reliable acoustic representation of sentiment, appropriately labeled. The lack of an existing, large-scale dataset in this form forces researchers to curate audio datasets from a variety of sources, often by manually labeling the audio corpus. However, this approach is inherently subjective. What appears “positive” to one human listener may appear “neutral” to another. Such challenges yield sub-optimal datasets that are often class imbalanced, and the inevitable biases present in the labeling process can permeate these models in problematic ways. To mitigate these disadvantages, we propose the use of a text-to-speech (TTS) engine to generate labeled synthetic voice samples rendered in one of three sentiments: positive, negative, or neutral. The advantage of using a TTS engine is that it can be abstracted as a function that generates an infinite set of labeled samples, on which a sentiment detection model can be trained. We investigate, in particular, the extent to which such training exhibits acceptable accuracy when the induced model is tested on a separate, independent and identically distributed speech source (i.e., the test dataset is not drawn from the same distribution as the training dataset). Our results indicate that this approach shows promise and the induced model does not suffer from underspecification.","PeriodicalId":193702,"journal":{"name":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Project Vāc: Can a Text-to-Speech Engine Generate Human Sentiments?\",\"authors\":\"S. Kulkarni, Luis Barbado, Jordan Hosier, Yu Zhou, Siddharth Rajagopalan, V. Gurbani\",\"doi\":\"10.1109/sped53181.2021.9587366\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sentiment analysis is an important area of natural language processing (NLP) research, and is increasingly being performed by machine learning models. Much of the work in this area is concentrated on extracting sentiment from textual data sources. Clearly however, a textual source does not convey the pitch, prosody, or power of the spoken sentiment, making it attractive to extract sentiments from an audio stream. A fundamental prerequisite for sentiment analysis on audio streams is the availability of reliable acoustic representation of sentiment, appropriately labeled. The lack of an existing, large-scale dataset in this form forces researchers to curate audio datasets from a variety of sources, often by manually labeling the audio corpus. However, this approach is inherently subjective. What appears “positive” to one human listener may appear “neutral” to another. 
Such challenges yield sub-optimal datasets that are often class imbalanced, and the inevitable biases present in the labeling process can permeate these models in problematic ways. To mitigate these disadvantages, we propose the use of a text-to-speech (TTS) engine to generate labeled synthetic voice samples rendered in one of three sentiments: positive, negative, or neutral. The advantage of using a TTS engine is that it can be abstracted as a function that generates an infinite set of labeled samples, on which a sentiment detection model can be trained. We investigate, in particular, the extent to which such training exhibits acceptable accuracy when the induced model is tested on a separate, independent and identically distributed speech source (i.e., the test dataset is not drawn from the same distribution as the training dataset). Our results indicate that this approach shows promise and the induced model does not suffer from underspecification.\",\"PeriodicalId\":193702,\"journal\":{\"name\":\"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)\",\"volume\":\"83 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/sped53181.2021.9587366\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Speech Technology and Human-Computer Dialogue (SpeD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/sped53181.2021.9587366","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Sentiment analysis is an important area of natural language processing (NLP) research, and is increasingly being performed by machine learning models. Much of the work in this area is concentrated on extracting sentiment from textual data sources. Clearly however, a textual source does not convey the pitch, prosody, or power of the spoken sentiment, making it attractive to extract sentiments from an audio stream. A fundamental prerequisite for sentiment analysis on audio streams is the availability of reliable acoustic representation of sentiment, appropriately labeled. The lack of an existing, large-scale dataset in this form forces researchers to curate audio datasets from a variety of sources, often by manually labeling the audio corpus. However, this approach is inherently subjective. What appears “positive” to one human listener may appear “neutral” to another. Such challenges yield sub-optimal datasets that are often class imbalanced, and the inevitable biases present in the labeling process can permeate these models in problematic ways. To mitigate these disadvantages, we propose the use of a text-to-speech (TTS) engine to generate labeled synthetic voice samples rendered in one of three sentiments: positive, negative, or neutral. The advantage of using a TTS engine is that it can be abstracted as a function that generates an infinite set of labeled samples, on which a sentiment detection model can be trained. We investigate, in particular, the extent to which such training exhibits acceptable accuracy when the induced model is tested on a separate, independent and identically distributed speech source (i.e., the test dataset is not drawn from the same distribution as the training dataset). Our results indicate that this approach shows promise and the induced model does not suffer from underspecification.
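
The central abstraction in the paper, a TTS engine treated as a function that yields an unbounded stream of labeled (audio, sentiment) pairs, can be sketched in a few lines of Python. This is a minimal illustration only, not the authors' implementation: the synthesize() stub and the way a target sentiment is passed to it are hypothetical placeholders for whatever sentiment-capable TTS engine is actually used.

import random
from typing import Iterator, Tuple

SENTIMENTS = ("positive", "negative", "neutral")

def synthesize(text: str, sentiment: str) -> bytes:
    # Hypothetical TTS call: render `text` as audio expressing the
    # requested sentiment. A real implementation would delegate to an
    # expressive TTS engine; this stub only marks where it plugs in.
    raise NotImplementedError("plug in a sentiment-capable TTS engine")

def labeled_samples(prompts: list[str]) -> Iterator[Tuple[bytes, str]]:
    # The TTS engine abstracted as a generator of labeled training data:
    # each item is an (audio, sentiment-label) pair, and the stream is
    # unbounded, so a sentiment-detection model can draw as many
    # training examples as it needs.
    while True:
        text = random.choice(prompts)
        sentiment = random.choice(SENTIMENTS)
        yield synthesize(text, sentiment), sentiment

A training loop would iterate over labeled_samples(...), while evaluation, as in the paper, would use separately collected human speech so that the test distribution differs from the synthetic training distribution.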