Emotional Musification
Andrew Godbout, Iulius A. T. Popa, J. Boyd
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion, published 2018-09-12
DOI: 10.1145/3243274.3243303 (https://doi.org/10.1145/3243274.3243303)
Citations: 1
Abstract
We present a method for emotional musification that utilizes the musical game MUSE. We take advantage of the strong links between music and emotion to represent emotions as music. While we provide a prototype for measuring emotion using facial expressions and physiological signals, our sonification is not dependent on it. Rather, we identify states within MUSE that elicit certain emotions and map those onto the arousal-valence spatial representation of emotion. In this way, our approach is compatible with any emotion detection method whose output can be mapped to arousal and valence. Because MUSE is based on states and state transitions, we gain the ability to transition seamlessly from one state to another as new emotions are detected, thus avoiding abrupt changes between music types.
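The mapping the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the state names and their (valence, arousal) anchor coordinates are hypothetical, standing in for MUSE game states that elicit particular emotions. A detector that reports a point in arousal-valence space is matched to the nearest anchored state.

```python
import math

# Hypothetical anchor points, (valence, arousal) in [-1, 1], for a few
# assumed MUSE states and the emotions they might elicit (illustrative only).
STATE_ANCHORS = {
    "calm":   (0.5, -0.6),
    "joyful": (0.8, 0.7),
    "tense":  (-0.6, 0.8),
    "somber": (-0.7, -0.5),
}

def nearest_state(valence: float, arousal: float) -> str:
    """Return the state whose anchor is closest (Euclidean distance)
    to the detected (valence, arousal) point."""
    return min(
        STATE_ANCHORS,
        key=lambda s: math.dist(STATE_ANCHORS[s], (valence, arousal)),
    )

# A detector reporting high valence and high arousal selects "joyful";
# as new readings arrive, the game transitions between states rather
# than cutting abruptly between music types.
print(nearest_state(0.7, 0.6))  # -> joyful
```

Because the selection is driven by continuous coordinates rather than discrete labels, successive detections that drift across the space trigger state transitions, which the paper notes MUSE can render as seamless musical changes.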