Enhanced concert experience using multimodal feedback from live performers

Kouyou Otsu, Hidekazu Takahashi, Hisato Fukuda, Yoshinori Kobayashi, Y. Kuno
DOI: 10.1109/HSI.2017.8005047
Published in: 2017 10th International Conference on Human System Interactions (HSI), July 2017
Citations: 1

Abstract

In this paper, we aim to enhance the interaction between the performer and the audience in live idol performances. We propose a system for converting the movements of individual members of an idol group into vibrations, and their voices into light, on handheld devices held by the audience. Specifically, for each performer, the system acquires data on movement and voice magnitudes via an acceleration sensor attached to the right wrist and a microphone. The obtained data is then converted into motor vibrations and light from an LED. The receiving devices for the audience members come in the form of a pen light or a doll. A prototype system was built to collect acceleration and voice-magnitude measurements in experiments with an idol group in Japan, to verify whether the performers' movements and singing voices could be correctly measured under real live-performance conditions. We developed a program that presents the strength of the movements and singing voice of one of the members as vibrations and light, based on the recorded data. Then, an experiment was conducted with eight subjects who observed the performance. We found that seven out of eight subjects could identify the idol performer corresponding to the vibrations and lighting from the device.
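The paper does not publish its signal-mapping code. A minimal sketch of the conversion it describes, assuming a simple linear mapping from acceleration magnitude (from the wrist sensor) and voice level (from the microphone) to 8-bit PWM duty cycles driving the vibration motor and LED; all thresholds, ranges, and function names here are hypothetical:

```python
import math

def acceleration_magnitude(ax, ay, az):
    """Euclidean norm of a 3-axis accelerometer sample (units of g)."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def to_pwm(value, lo, hi, max_duty=255):
    """Linearly map a reading in [lo, hi] to a 0-255 PWM duty cycle, clamped."""
    frac = (value - lo) / (hi - lo)
    return round(max(0.0, min(1.0, frac)) * max_duty)

# Example frame: a vigorous dance move and a fairly loud vocal sample.
motion = acceleration_magnitude(1.2, 1.8, 1.2)   # about 2.47 g
vibration_duty = to_pwm(motion, lo=1.0, hi=3.0)  # drives the motor in the pen light / doll
led_duty = to_pwm(0.8, lo=0.0, hi=1.0)           # LED brightness from normalized voice level
```

In practice the readings would be smoothed over a short window before mapping, so that the vibration and light track the performer's gestures and phrases rather than individual noisy samples.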