Generating Utterances for Companion Robots using Television Program Subtitles

Yuta Hagio, Makoto Okuda, Marina Kamimura, Yutaka Kaneko, H. Ohmata
{"title":"Generating Utterances for Companion Robots using Television Program Subtitles","authors":"Yuta Hagio, Makoto Okuda, Marina Kamimura, Yutaka Kaneko, H. Ohmata","doi":"10.1145/3573381.3596463","DOIUrl":null,"url":null,"abstract":"This study presents a method for generating utterances for companion robots that watch TV with people, using TV program subtitles. To enable the robot to automatically generate relevant utterances while watching TV, we created a dataset of approximately 12,000 utterances that were manually added to the collected TV subtitles. Using this dataset, we fine-tuned a large-scale language model to construct an utterance generation model. The proposed model generates utterances based on multiple keywords extracted from the subtitles as topics, while also taking into account the context of the subtitles by inputting them. The evaluation of the generated utterances revealed that approximately 88% of the sentences were natural Japanese, and approximately 75% were relevant and natural in the context of the TV program. Moreover, approximately 99% of the sentences contained the extracted keywords, indicating that our proposed method can generate diverse and contextually appropriate utterances containing the targeted topics. These findings provide evidence of the effectiveness of our approach in generating natural utterances for companion robots that watch TV with people.","PeriodicalId":120872,"journal":{"name":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","volume":"64 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 ACM International Conference on Interactive Media Experiences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3573381.3596463","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This study presents a method for generating utterances for companion robots that watch TV with people, using TV program subtitles. To enable the robot to automatically generate relevant utterances while watching TV, we created a dataset of approximately 12,000 utterances that were manually added to the collected TV subtitles. Using this dataset, we fine-tuned a large-scale language model to construct an utterance generation model. The proposed model generates utterances based on multiple keywords extracted from the subtitles as topics, while also taking into account the context of the subtitles by inputting them. The evaluation of the generated utterances revealed that approximately 88% of the sentences were natural Japanese, and approximately 75% were relevant and natural in the context of the TV program. Moreover, approximately 99% of the sentences contained the extracted keywords, indicating that our proposed method can generate diverse and contextually appropriate utterances containing the targeted topics. These findings provide evidence of the effectiveness of our approach in generating natural utterances for companion robots that watch TV with people.
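The abstract describes conditioning a fine-tuned language model on both the recent subtitle context and keywords extracted from those subtitles as topics. As an illustration only, the sketch below shows one way such an input could be assembled and decoded with the Hugging Face transformers API; the model name, prompt layout, decoding settings, and the helper `generate_utterance` are assumptions made for this sketch, not details taken from the paper.

```python
# Illustrative sketch of the generation step described in the abstract:
# recent subtitle lines supply context, extracted keywords supply the topic,
# and a fine-tuned causal language model produces the robot's utterance.
# Model name, prompt format, and decoding parameters are placeholders,
# not the authors' actual configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "your-finetuned-japanese-lm"  # placeholder for the fine-tuned model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)


def generate_utterance(subtitle_context: list[str], keywords: list[str]) -> str:
    """Generate one companion-robot utterance from subtitle context and topic keywords."""
    # Concatenate recent subtitles as context and list the target keywords,
    # mirroring the abstract's description of feeding both to the model.
    prompt = (
        "字幕: " + " ".join(subtitle_context) + "\n"
        "キーワード: " + "、".join(keywords) + "\n"
        "発話: "
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=40,       # utterances are short conversational remarks
        do_sample=True,          # sampling encourages diverse utterances
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens, dropping the prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()


# Example call with illustrative subtitles from a cooking program:
# generate_utterance(["今日は旬の野菜を使った料理を紹介します。"], ["旬", "野菜"])
```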