Evolving in-game mood-expressive music with MetaCompose

Marco Scirea, Peter W. Eklund, J. Togelius, S. Risi
DOI: 10.1145/3243274.3243292
Published in: Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion, 2018-09-12
Citations: 11

Abstract

MetaCompose is a music generator based on a hybrid evolutionary technique that combines FI-2POP and multi-objective optimization. In this paper we employ the MetaCompose music generator to create music in real-time that expresses different mood-states in a game-playing environment (Checkers). In particular, this paper focuses on determining whether differences in player experience can be observed when: (i) affective-dynamic music is used rather than static music, and (ii) the music supports the game's internal narrative/state. Participants were tasked with playing two games of Checkers while listening to two (out of three) different set-ups of game-related generated music. The possible set-ups were: static expression, consistent affective expression, and random affective expression. During game-play players wore an E4 wristband, allowing various physiological measures to be recorded, such as blood volume pulse (BVP) and electrodermal activity (EDA). The collected data confirm, on three out of four criteria (engagement, music quality, coherency with game excitement, and coherency with performance), the hypothesis that players prefer dynamic affective music when asked to reflect on the current game-state. In the future this system could allow designers/composers to easily create affective and dynamic soundtracks for interactive applications.
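The FI-2POP technique named in the abstract maintains two co-evolving populations: a feasible population selected on the optimization objective(s) and an infeasible population selected on minimizing constraint violation, with individuals migrating between the two as their feasibility changes. A minimal sketch of that idea follows; the genome (a list of pitch classes), the constraint, and the fitness function here are illustrative stand-ins, not MetaCompose's actual music-theoretic criteria.

```python
import random

GENOME_LEN = 8   # toy melody length (assumed for this demo)
POP_SIZE = 20

def random_genome():
    # A genome is a sequence of pitch classes 0-11.
    return [random.randint(0, 11) for _ in range(GENOME_LEN)]

def violation(g):
    # Toy constraint: count consecutive repeated notes (hypothetical).
    return sum(1 for a, b in zip(g, g[1:]) if a == b)

def fitness(g):
    # Toy objective: prefer stepwise motion (small melodic intervals).
    return -sum(abs(a - b) for a, b in zip(g, g[1:]))

def mutate(g):
    g = g[:]
    g[random.randrange(len(g))] = random.randint(0, 11)
    return g

def fi2pop(generations=100):
    pop = [random_genome() for _ in range(POP_SIZE * 2)]
    for _ in range(generations):
        # Split into the two populations by feasibility.
        feasible = [g for g in pop if violation(g) == 0]
        infeasible = [g for g in pop if violation(g) > 0]
        # Feasible individuals compete on the musical objective;
        # infeasible individuals compete on reducing violations.
        feasible.sort(key=fitness, reverse=True)
        infeasible.sort(key=violation)
        parents = feasible[:POP_SIZE // 2] + infeasible[:POP_SIZE // 2]
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(POP_SIZE)]
    feasible = [g for g in pop if violation(g) == 0]
    return max(feasible, key=fitness) if feasible else min(pop, key=violation)
```

In the full system this search runs per music-generation request, and the multi-objective layer replaces the single `fitness` score with several competing criteria.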