
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion: Latest Publications

Subjective Evaluation of a Speech Emotion Recognition Interaction Framework
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243294
N. Vryzas, María Matsiola, Rigas Kotsakis, Charalampos A. Dimoulas, George M. Kalliris
In the current work, we present a subjective evaluation of three basic components of a framework for applied Speech Emotion Recognition (SER) in theatrical performance and in social media communication and interaction. The multidisciplinary survey group used for the evaluation consists of participants with backgrounds in Theatrical and Performance Arts as well as Journalism and Mass Communication Studies. Initially, a publicly available database of emotional speech utterances, the Acted Emotional Speech Dynamic Database (AESDD), is evaluated. We examine the degree of agreement between the emotion perceived by the participants and the intended expressed emotion in the AESDD recordings. Furthermore, the participants are asked to choose between different coloured lighting for certain scenes captured on video. Correlations between the emotional content of the scenes and the selected colours are observed and discussed. Finally, a prototype application for SER and multimodal speech emotion data gathering is evaluated in terms of Usefulness, Ease of Use, Ease of Learning and Satisfaction.
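The four dimensions listed at the end of the abstract are those of Lund's USE questionnaire. A minimal sketch of aggregating such ratings per dimension, with hypothetical 7-point Likert scores (not the study's actual data):

```python
from statistics import mean

# Hypothetical 7-point Likert ratings from a handful of participants,
# grouped by the four evaluation dimensions named in the abstract.
ratings = {
    "Usefulness":       [6, 5, 7, 6, 5],
    "Ease of Use":      [5, 6, 6, 7, 6],
    "Ease of Learning": [7, 6, 7, 6, 7],
    "Satisfaction":     [6, 6, 5, 6, 7],
}

# One mean score per dimension, rounded for reporting.
scores = {dim: round(mean(vals), 2) for dim, vals in ratings.items()}
for dim, score in scores.items():
    print(f"{dim}: {score}")
```

Real USE-style studies typically report these per-dimension means alongside their spread (standard deviation or interquartile range).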
Citations: 6
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion
Citations: 3
On Transformations between Paradigms in Audio Programming
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243298
R. Kraemer, Cornelius Pöpel
Research on paradigms in audio and music programming is an ongoing endeavor. However, although new audio programming paradigms have been created, already established paradigms have prevailed and dominate major music production systems. Our research addresses the question of how programming paradigms and music production interact. We describe the implementation process of an imperative algorithm calculating the greatest common divisor (gcd) in Pure Data and exemplify common problems of transformational processes between an imperative paradigm and a patch paradigm. Taking a closer look at related problems in research on programming paradigms in general, we raise the question of how the constraints and boundaries of paradigms play a role in the design process of a program. Through a discussion of selected papers in the context of computer science, we give insight into different views of how the process of programming can be thought of and how certain application domains demand a specific paradigm.
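The imperative gcd algorithm the authors port to Pure Data is, in its textbook form, Euclid's algorithm. A minimal Python rendering makes the imperative control flow explicit; the loop with mutable state is exactly what has no direct equivalent in a dataflow patch:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b != 0:          # loop + mutable state: the imperative constructs
        a, b = b, a % b    # that a Pure Data patch must emulate with
    return a               # triggers and feedback connections

print(gcd(48, 36))  # → 12
```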
Citations: 0
Re-Thinking Immersive Technologies for Audiences of the Future
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3275379
A. Chamberlain, S. Benford, A. Dix
This note introduces the notion of immersive technologies and accompanies a presentation; by starting to think about the nature of such systems, we develop a position that questions existing preconceptions of immersive technologies. To accomplish this, we take a series of technologies that we have developed at the Mixed Reality Lab and present a vignette based on each of them in order to stimulate debate and discussion at the workshop. Each of these technologies has its own particular qualities, and they are ideal for 'speculative' approaches to designing interactive possibilities. This short paper also begins to examine how qualitative approaches such as autoethnography can be used to understand and unpack our interactions with, and feelings about, these technologies.
Citations: 1
Evolving in-game mood-expressive music with MetaCompose
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243292
Marco Scirea, Peter W. Eklund, J. Togelius, S. Risi
MetaCompose is a music generator based on a hybrid evolutionary technique that combines FI-2POP and multi-objective optimization. In this paper we employ the MetaCompose music generator to create music in real time that expresses different mood-states in a game-playing environment (Checkers). In particular, this paper focuses on determining whether differences in player experience can be observed when: (i) affective-dynamic music is used compared to static music, and (ii) the music supports the game's internal narrative/state. Participants were tasked with playing two games of Checkers while listening to two (out of three) different set-ups of game-related generated music. The possible set-ups were: static expression, consistent affective expression, and random affective expression. During game-play players wore an E4 Wristband, allowing various physiological measures to be recorded, such as blood volume pulse (BVP) and electrodermal activity (EDA). The data collected confirm, on three out of four criteria (engagement, music quality, coherency with game excitement, and coherency with performance), the hypothesis that players prefer dynamic affective music when asked to reflect on the current game-state. In the future this system could allow designers/composers to easily create affective and dynamic soundtracks for interactive applications.
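FI-2POP, the feasible-infeasible two-population genetic algorithm named in the abstract, keeps feasible and infeasible individuals in separate populations: feasible ones are selected on the objective, infeasible ones on closeness to feasibility, and individuals migrate between populations as mutation changes their feasibility. A minimal sketch with toy objective and constraint functions (hypothetical stand-ins, not MetaCompose's actual musical fitness):

```python
import random

random.seed(1)

GENOME_LEN, POP_SIZE, GENERATIONS, BOUND = 8, 30, 40, 4.0

def violation(genome):
    # Toy constraint (hypothetical): gene values must sum to at most BOUND.
    return max(0.0, sum(genome) - BOUND)

def objective(genome):
    # Toy objective (hypothetical): among feasible genomes, prefer larger sums.
    return sum(genome)

def mutate(genome):
    child = list(genome)
    i = random.randrange(GENOME_LEN)
    child[i] = min(1.0, max(0.0, child[i] + random.uniform(-0.2, 0.2)))
    return child

def evolve():
    pop = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        candidates = pop + [mutate(random.choice(pop)) for _ in range(POP_SIZE)]
        # The two populations: feasible genomes compete on the objective,
        # infeasible genomes compete on (minimal) constraint violation.
        feasible = sorted((g for g in candidates if violation(g) == 0.0),
                          key=objective, reverse=True)
        infeasible = sorted((g for g in candidates if violation(g) > 0.0),
                            key=violation)
        # Truncation selection within each population; migration happens when
        # a mutated infeasible genome becomes feasible (or vice versa).
        pop = feasible[:POP_SIZE // 2] + infeasible[:POP_SIZE // 2]
    feasible = [g for g in pop if violation(g) == 0.0]
    return max(feasible, key=objective) if feasible else min(pop, key=violation)

best = evolve()
print(round(objective(best), 3), violation(best))
```

In MetaCompose the constraints and objectives are musical; the sum-based functions above merely stand in for them to show the two-population selection scheme.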
Citations: 11
Auditory Masking and the Precedence Effect in Studies of Musical Timekeeping
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243312
Steffan Owens, Stuart Cunningham
Musical timekeeping is an important and evolving area of research with applications in a variety of music education and performance situations. Studies in this field are often concerned with being able to measure the accuracy or consistency of human participants, for whatever purpose is being investigated. Our initial explorations suggest that little has been done to consider the role that auditory masking, specifically the precedence effect, plays in the study of human timekeeping tasks. In this paper, we highlight the importance of integrating masking into studies of timekeeping and suggest areas for discussion and future research, to address shortfalls in the literature.
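The precedence effect the abstract invokes concerns lead-lag pairs of sounds: for lags of roughly a few to a few tens of milliseconds, listeners fuse the pair into a single event localized toward the leading sound. A minimal sketch (pure Python, hypothetical click stimulus) constructing such a stereo lead-lag pair as sample buffers:

```python
SAMPLE_RATE = 44_100  # samples per second

def click(length, position, amplitude=1.0):
    """A single-sample impulse at `position` in an otherwise silent buffer."""
    buf = [0.0] * length
    buf[position] = amplitude
    return buf

def lead_lag_pair(delay_ms, lag_gain=0.7, length=4_410):
    """Leading click in the left channel, delayed attenuated copy in the
    right. At short lags the pair is heard as one click, pulled toward the
    leading (left) channel -- the precedence effect."""
    delay_samples = int(SAMPLE_RATE * delay_ms / 1000)
    left = click(length, 0)
    right = click(length, delay_samples, amplitude=lag_gain)
    return left, right

left, right = lead_lag_pair(delay_ms=5)
print(right.index(0.7))  # → 220 (lag onset in samples at 44.1 kHz)
```

A timekeeping study would play such pairs during tapping tasks to test whether the lagging click biases the perceived beat onset.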
Citations: 0
The Design of Future Music Technologies: 'Sounding Out' AI, Immersive Experiences & Brain Controlled Interfaces
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243314
A. Chamberlain, Mads Bødker, Maria Kallionpää, Richard Ramchurn, D. D. Roure, S. Benford, A. Dix
This workshop examines the interplay between people, musical instruments, performance and technology. Now, more than ever, technology is enabling us to augment the body, develop new ways to play and perform, and augment existing instruments so that they can span the physical and digital realms. By bringing together performers, artists, designers and researchers, we aim to develop new understandings of how we might design new performance technologies. Participants will be actively encouraged to take part, engaging with other workshop attendees to explore concepts such as immersion, augmentation, emotion, physicality, data, improvisation, provenance, curation, context and temporality, and the ways these might be employed and unpacked with respect to both performing with, and understanding interaction with, new performance-based technologies that relate to the core themes of immersion and emotion.
Citations: 0
Smart Mandolin: autobiographical design, implementation, use cases, and lessons learned
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243280
L. Turchet
This paper presents the Smart Mandolin, an exemplar of the family of so-called smart instruments. Developed according to the paradigms of autobiographical design, it consists of a conventional acoustic mandolin enhanced with different types of sensors, a microphone, a loudspeaker, wireless connectivity to both local networks and the Internet, and a low-latency audio processing board. Various implemented use cases are presented, which leverage the smart qualities of the instrument. These include programming the instrument via smartphone and desktop applications, as well as the wireless control of devices enabling multimodal performances, such as screen-projected visuals, smartphones, and tactile devices used by the audience. The paper concludes with an evaluation conducted by the author himself after extensive use, which pinpointed pros and cons of the instrument and provided a comparison with the Hyper-Mandolin, an augmented instrument previously developed by the author.
Citations: 28
Designing Musical Soundtracks for Brain Controlled Interface (BCI) Systems
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243288
Richard Ramchurn, A. Chamberlain, S. Benford
This paper presents research based on the creation and development of two Brain Controlled Interface (BCI) based film experiences. The focus of this research is primarily on the audio in the films: the way the overall experiences were designed, the ways in which the soundtracks were specifically developed for those experiences, and the ways in which the audience perceived the use of the soundtrack in the film. Unlike traditional soundtracks, the adaptive nature of the audio means that there are multiple parts that can be interacted with and combined at specific moments. The design of such adaptive audio systems is yet to be fully understood, and this paper goes some way toward presenting our initial findings. We think that this research will be of interest to, and will excite, the Audio-HCI community.
Citations: 5
A Prototype Mixer to Improve Cross-Modal Attention During Audio Mixing
Pub Date : 2018-09-12 DOI: 10.1145/3243274.3243290
Josh Mycroft, T. Stockman, J. Reiss
The Channel Strip mixer found on physical mixing desks is the primary Graphical User Interface design for most Digital Audio Workstations. While this metaphor provides transferable knowledge from hardware, there is a risk that it does not always translate well into screen-based mixers. For example, the need to search through several windows of mix information may inhibit the engagement and 'flow' of the mixing process, and the screen management required to access the mixer across multiple windows can place a high cognitive load on working memory and overload the limited capacity of the visual mechanism. This paper trials an eight-channel prototype mixer that takes a novel approach to mixer design to address these issues. The mixer presents an overview of the visual interface and employs multivariate data objects for channel parameters, which can be filtered by the user. Our results suggest that this design, by reducing both the complexity of visual search and the amount of visual feedback on the screen at any one time, leads to improved results in terms of visual search, critical listening and mixing workflow.
Citations: 3