
Latest Publications: Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion

Acoustic Vehicle Alerting Systems: Will they affect the acceptance of electric vehicles?
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243305
Johan Fagerlönn, Anna Sirkka, Stefan Lindberg, R. Johnsson
Vehicles powered by electric motors can be very quiet at low speeds, which can create new road safety issues. The European Parliament has decided that quiet vehicles should be equipped with an Acoustic Vehicle Alerting System (AVAS). The main purpose of the studies presented in this paper was to investigate whether the future requirements could affect people's acceptance of electric vehicles (EVs). The first study created an immersive, simulated auditory environment in which people could experience the sounds of future traffic situations; the second study was conducted with a car on a test track. The results suggest that the requirements are unlikely to have a major negative effect on people's experience of EVs or on their willingness to buy one. However, the sounds can have some negative effect on emotional response and acceptance, which manufacturers should consider. The test-track study indicates that unprotected road users may appreciate the function of an AVAS sound. The work did not reveal any large differences between AVAS sounds, although in the simulated environment, sounds designed to resemble an internal combustion engine tended to receive more positive scores.
Citations: 3
An Immersive Approach to 3D-Spatialized Music Composition: Tools and Pilot Survey
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243300
D. Ledoux, R. Normandeau
Open-source 3D sound spatialisation software tools, developed by the Groupe de Recherche en Immersion Spatiale (GRIS) at Université de Montréal, were used as an integrated part of two music compositions in an immersive, object-based audio approach. A preliminary listening experiment was conducted with two separate groups of students in a 32.2-loudspeaker dome, as a pilot for a case study that aims to better understand the immersive affect of complex spatialized compositions through listeners' reception behaviours. Data collected from their comments on these two different 3D-spatialized works were analysed to extract converging expressions of immersive qualities.
Citations: 1
Music retiler: Using NMF2D source separation for audio mosaicing
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243299
H. F. Aarabi, G. Peeters
Musaicing (music mosaicing) aims at reconstructing a target music track by superimposing audio samples selected from a collection; the selection is based on their acoustic similarity to the target. The baseline technique is concatenative synthesis, in which the superposition occurs only in time. Non-negative Matrix Factorization (NMF) has also been proposed for this task: a target spectrogram is factorized into an activation matrix and a predefined basis matrix that represents the sample collection, so the superposition occurs in both time and frequency. However, in both methods the samples used for the reconstruction represent isolated sources (such as bees) and remain unchanged during the musaicing (samples need to be pre-pitch-shifted), which reduces the applicability of these methods. We propose a variation of musaicing in which the samples used for the reconstruction are obtained by applying an NMF2D separation algorithm to a music collection (such as a collection of Reggae tracks). Using these separated samples, a second NMF2D algorithm then automatically finds the best transposition factors to represent the target. We performed an online perceptual experiment of our method, which shows that it outperforms the NMF algorithm when the sources are polyphonic and multi-source.
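The NMF step the abstract describes — a fixed basis matrix built from the sample collection, with only the activations learned against the target spectrogram — can be sketched with plain multiplicative updates. This is a minimal illustration of the baseline NMF musaicing idea, not the paper's NMF2D method; all sizes and data here are toy assumptions.

```python
import numpy as np

def nmf_activations(V, W, n_iter=200, eps=1e-10):
    """Learn activations H so that V ~= W @ H, with the basis W held fixed.

    In NMF-based musaicing, each column of W is the magnitude spectrum of one
    sample from the collection; H then says when (and how strongly) each sample
    is triggered to approximate the target spectrogram V. The multiplicative
    update below minimises the Euclidean distance ||V - WH||.
    """
    rng = np.random.default_rng(0)
    H = rng.random((W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return H

# Toy demo: a "collection" of 3 spectral templates and a target built from them.
rng = np.random.default_rng(1)
W = rng.random((64, 3))        # fixed basis: 3 sample spectra, 64 frequency bins
H_true = rng.random((3, 20))   # ground-truth activations over 20 time frames
V = W @ H_true                 # target spectrogram
H = nmf_activations(V, W)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small relative error
```

Because W stays fixed, the superposition happens in both time (columns of H) and frequency (mixing several basis spectra per frame), which is exactly the property the abstract contrasts with purely temporal concatenative synthesis.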
Citations: 4
A Web-based Real-Time Kinect Application for Gestural Interaction with Virtual Musical Instruments
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243297
Athanasia Zlatintsi, P. Filntisis, Christos Garoufis, A. Tsiami, Kosmas Kritsis, Maximos A. Kaliakatsos-Papakostas, Aggelos Gkiokas, V. Katsouros, P. Maragos
We present a web-based real-time application that enables gestural interaction with virtual instruments for musical expression. Users' skeletons are tracked by a Kinect sensor, while the virtual instruments are performed using gestures inspired by their physical counterparts. The application supports the virtual performance of an air guitar and an upright bass, as well as a more abstract, conductor-like performance with two instruments; collaborative playing by two or more players is also supported. The multimodal virtual interface, which includes 3D avatars, allows users, even those without musical training, to engage in innovative interactive musical activities, while the web-based architecture improves accessibility and performance. The application was qualitatively evaluated by 13 users in terms of its usability and enjoyability, among other criteria, achieving high ratings and positive feedback.
Citations: 9
Playing the Body: Making Music through Various Body Movements
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243287
Junko Ichino, Hayato Nao
We explore bodily interaction as a creative experience that supports musical expression. This paper discusses an interactive system---Playing the Body---that supports the creative activity of composing music by incorporating large body movements in space. To encourage the user to form an overall image of the melody in the early stages of composition, the proposed system supports interaction using the whole body to generate a melody. After this trial-and-error stage, it provides a refinement stage that encourages introspection, refining the melody so the sound better matches the ideal image; this is done by supporting interaction using the hands and arms, which offer a greater degree of freedom. In a pilot study, positive responses were obtained regarding the creation of a melody using the whole body. Future work includes improving the use of the hands and arms to refine the melody.
Citations: 1
Staging sonic atmospheres as the new aesthetic work
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243286
E. Toppano, Alessandro Toppano
Our primary concern in this paper is to bring attention to the promising, yet largely unexplored, concept of atmosphere in sound design. Although the notion is not new, we approach it from a novel perspective, namely New Phenomenology and New Aesthetics. Accordingly, we review some basic theoretical results in these fields and explore their possible application in the sonic context. In particular, the paper: i) compares the concept of sonic atmosphere with the notions of acoustic environment and soundscape by articulating the salient elements that constitute each concept; ii) discusses some consequences of this distinction for the understanding of emotion and immersion; and iii) provides some initial suggestions on how to design for emotions.
Citations: 2
Procedurally-Generated Audio for Soft-Body Animations
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243285
Feng Su, C. Joslin
Procedurally-generated audio is an important method for the automatic synthesis of realistic sounds for computer animations and virtual environments. While synthesis techniques for rigid bodies have been well studied, few publications have tackled the challenges of synthesizing sounds for soft bodies. In this paper, we propose a data-driven synthesis approach that generates audio to accompany a given soft-body animation. Our method uses granular synthesis to extract a database of sound grains from real-world recordings and then retargets these grains based on the motion of any input animation. We demonstrate the effectiveness of this method on a variety of soft-body animations, including a basketball bouncing, apple slicing, hand clapping, and a jelly simulation.
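The grain-extraction and motion-driven retargeting pipeline described above can be sketched in a few lines. This is a simplified stand-in under stated assumptions: grains are selected by nearest RMS energy to a per-frame control value (e.g. contact speed from the animation) and recombined by windowed overlap-add; the paper's actual feature matching is not specified here, and all signals are synthetic toys.

```python
import numpy as np

def make_grains(audio, grain_len=512, hop=256):
    """Slice a recording into overlapping grains (the granular-synthesis database)."""
    starts = range(0, len(audio) - grain_len + 1, hop)
    return np.stack([audio[s:s + grain_len] for s in starts])

def retarget(grains, control, grain_len=512, hop=256):
    """For each control value from the animation, pick the grain whose RMS
    energy is closest, then overlap-add the chosen grains with a Hann window."""
    rms = np.sqrt((grains ** 2).mean(axis=1))
    win = np.hanning(grain_len)
    out = np.zeros(hop * (len(control) - 1) + grain_len)
    for i, c in enumerate(control):
        g = grains[np.argmin(np.abs(rms - c))]   # nearest-energy grain
        out[i * hop:i * hop + grain_len] += g * win
    return out

# Toy demo: grains from a noisy "recording", driven by a ramp of target energies.
rng = np.random.default_rng(0)
recording = rng.standard_normal(8000) * np.linspace(0.1, 1.0, 8000)
grains = make_grains(recording)
control = np.linspace(0.1, 0.9, 16)   # e.g. per-frame contact speeds
audio = retarget(grains, control)
print(audio.shape)
```

In a fuller system the control signal would be derived from the soft-body simulation (deformation or impact magnitude per frame), and grain selection would use richer spectral features than RMS alone.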
Citations: 3
Emotional Musification
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243303
Andrew Godbout, Iulius A. T. Popa, J. Boyd
We present a method for emotional musification that utilizes the musical game MUSE, taking advantage of the strong links between music and emotion to represent emotions as music. While we provide a prototype for measuring emotion using facial expression and physiological signals, our sonification does not depend on it. Rather, we identify states within MUSE that elicit certain emotions and map them onto the arousal-valence spatial representation of emotion. In this way, our approach is compatible with any emotion-detection method whose output can be mapped to arousal and valence. Because MUSE is based on states and state transitions, the music can move seamlessly from one state to another as new emotions are detected, avoiding abrupt changes between music types.
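The mapping the abstract relies on — any detector's (valence, arousal) output resolved to a music state in that plane — can be illustrated with a nearest-neighbour lookup. The state names and their coordinates below are hypothetical placeholders, not taken from the paper or from MUSE.

```python
import numpy as np

# Hypothetical music states positioned in the valence-arousal plane
# (coordinates are illustrative assumptions, not from the paper).
STATES = {
    "calm":  (0.6, -0.6),
    "happy": (0.8,  0.7),
    "tense": (-0.7, 0.8),
    "sad":   (-0.6, -0.5),
}

def nearest_state(valence, arousal):
    """Resolve a detected (valence, arousal) reading to the closest music state,
    so any detector whose output maps to this plane can drive the music."""
    names = list(STATES)
    pts = np.array([STATES[n] for n in names])
    d = np.linalg.norm(pts - np.array([valence, arousal]), axis=1)
    return names[int(np.argmin(d))]

print(nearest_state(0.7, 0.6))    # -> happy
print(nearest_state(-0.5, -0.4))  # -> sad
```

A state-machine layer on top of this lookup (as in MUSE's state transitions) would then crossfade between the current and target states rather than cutting directly, which is what avoids abrupt changes between music types.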
Citations: 1
Exploring the Creation of Useful Interfaces for Music Therapists
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243307
Leya Breanna Baltaxe-Admony, Tom Hope, Kentaro Watanabe, M. Teodorescu, S. Kurniawan, Takuichi Nishimura
Music therapy is used worldwide to connect communities, strengthen mental and physiological wellbeing, and provide new means of communication for individuals with phonological, social, language, and other communication disorders. Incorporating technology into music therapy has many potential benefits. Prior research has created user-friendly devices for music therapy clients, but these technologies have gone unused because the music therapists themselves found them complicated to operate. This paper reports the iterative prototype design of a compact, intuitive device created in close collaboration with music therapists across the globe to promote the usefulness and usability of prototypes. The device features interchangeable interfaces for work with diverse populations, and it is portable and hand-held; a device combining these features does not yet exist. The design specifications outlined for this device were derived using human-centred design techniques and may be of significant use in designing other technologies in this field. Specifications were created over two design iterations and evaluations of the device. In an evaluation of the second iteration, five of the eight therapists wanted to incorporate the device into their practices.
Citations: 3
Lovelace's Legacy: Creative Algorithmic Interventions for Live Performance
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3275380
D. D. Roure, P. Willcox, A. Chamberlain
We describe a series of informal exercises in which we have put algorithms in the hands of human performers in order to encourage a human creative response to mathematical and algorithmic input. These 'interventions' include a web-based app, experiments in physical space using Arduinos, and algorithmic augmentation of a keyboard.
Citations: 0