
Proceedings of the 15th International Audio Mostly Conference: Latest Publications

The impact of scaling the production of a new interface for musical expression on its design: a story of L2Orkmotes
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411110
Kyriakos D. Tsoukalas, J. Kubalak, I. Bukvic
Innovation in new musical interfaces is largely driven by ground-up endeavors that introduce a level of redundancy. Inspired by the success of the iPhone and other industry innovations driven by iteration, consolidation, and scalability, we present a new interface for musical expression and discuss key elements of its implementation and integration into an established laptop ensemble. In 2019, the Linux Laptop Orchestra of Virginia Tech (L2Ork) introduced the L2Orkmote, a custom reverse-engineered variant of the Wii Remote and Nunchuk controller that reorganizes sensors and buttons within an additively manufactured housing. The goal was to equip each orchestra member with two of the newly designed L2Orkmotes, which resulted in the production of 40 L2Orkmotes. This large-scale production mandated software improvements, including the development of a robust API that can support a large number of concurrently connected Bluetooth devices. Considering that new interfaces for musical expression (NIMEs) are rarely designed to scale, we report on the resulting design. We also share a large-scale real-world deployment concurrently utilizing 28 L2Orkmotes, present the supporting usability evaluation, and discuss the impact of scaling NIME production on its design.
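The abstract does not describe the API itself; purely as a hedged, hypothetical sketch of the general problem it names - keeping many concurrently connected controllers feeding one performance system - the Python fragment below multiplexes per-device reader threads onto a single event queue. The MockController class, its read_event method, and the event format are illustrative assumptions, not the L2Ork codebase.

```python
import queue
import random
import threading
import time

class MockController:
    """Hypothetical stand-in for one connected controller (e.g. an L2Orkmote)."""
    def __init__(self, device_id):
        self.device_id = device_id

    def read_event(self):
        # A real implementation would block on a Bluetooth HID report;
        # here we fabricate a sensor reading for illustration.
        time.sleep(random.uniform(0.05, 0.2))
        return {"device": self.device_id, "accel": random.random()}

def reader(controller, out_queue, stop):
    """Forward events from one controller into the shared queue."""
    while not stop.is_set():
        out_queue.put(controller.read_event())

def run(num_devices=28, duration_s=1.0):
    events = queue.Queue()
    stop = threading.Event()
    for i in range(num_devices):
        threading.Thread(
            target=reader, args=(MockController(i), events, stop), daemon=True
        ).start()
    deadline = time.time() + duration_s
    count = 0
    while time.time() < deadline:
        events.get()          # a real system would dispatch this to the synth engine
        count += 1
    stop.set()
    print(f"received {count} events from {num_devices} devices")

if __name__ == "__main__":
    run()
```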
Citations: 0
Fast synthesis of perceptually adequate room impulse responses from ultrasonic measurements
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3412300
Jing Yang, Felix Pfreundtner, Amit Barde, K. Heutschi, Gábor Sörös
Audio augmented reality (AAR) applications need to render virtual sounds with acoustic effects that match the user's real environment to create an experience with a strong sense of presence. This audio rendering process can be formulated as the convolution between the dry sound signal and the room impulse response (IR) covering the audible frequency spectrum (20 Hz - 20 kHz). While the IR can be pre-calculated in virtual reality (VR) scenes, AR applications need to estimate it continuously. We propose a method to synthesize room IRs based on the corresponding IR in the ultrasound frequency band (20 kHz - 22 kHz) and two parameters we propose in this paper: the slope factor and the RT60 ratio. We assess the synthesized IRs using common acoustic metrics and conducted a user study to evaluate the perceptual similarity between sounds rendered with the synthesized IR and with the recorded IR in different rooms. The method requires only a small number of pre-measurements in the environment to determine the synthesis parameters and uses only inaudible signals at runtime for fast IR synthesis, making it well suited for interactive AAR applications.
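The paper's slope factor and RT60 ratio are specific to its method and are not reproduced here. As a minimal sketch of the two generic building blocks the abstract mentions - synthesizing an exponentially decaying late-reverberation IR from a broadband RT60 value and rendering by convolution with the dry signal - the following Python fragment may help; the sample rate, RT60 value, and normalization are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_late_ir(rt60, fs=48000, length_s=1.5, seed=0):
    """Exponentially decaying noise tail whose energy drops 60 dB after rt60 seconds."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(length_s * fs)) / fs
    decay = 10.0 ** (-3.0 * t / rt60)          # -60 dB reached at t = rt60
    return rng.standard_normal(t.size) * decay

def render(dry, ir):
    """Convolve the dry signal with the (synthetic) room impulse response."""
    wet = fftconvolve(dry, ir)
    return wet / np.max(np.abs(wet))           # simple peak normalization

if __name__ == "__main__":
    fs = 48000
    dry = np.zeros(fs)
    dry[0] = 1.0                               # an impulse as a stand-in for dry audio
    ir = synthesize_late_ir(rt60=0.6, fs=fs)   # assumed broadband RT60 of 0.6 s
    print(render(dry, ir).shape)
```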
Citations: 1
Exploring polyrhythms, polymeters, and polytempi with the universal grid sequencer framework
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411122
Samuel J. Hunt
Polyrhythms, polymeters, and polytempo are compositional techniques that describe pulses that are desynchronized between two or more sequences of music. Digital systems permit the sequencing of notes at a near-infinite resolution, permitting an exponential number of complex rhythmic attributes in the music. Such techniques can be challenging to work with and notate effectively in existing popular music sequencing software and notations. Step sequencers provide a simple and effective interface for exploring any arbitrary division of time into an even number of steps, and such interfaces are easily expressible on grid-based music controllers. The paper therefore has two differing but related outputs: firstly, to demonstrate a framework for working with multiple physical grid controllers forming a larger unified grid, and to provide a consolidated set of tools for programming music instruments for it; secondly, to demonstrate how such a system provides a low entry threshold for exploring polyrhythm, polymeter, and polytempo relationships using desynchronized step sequencers.
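The universal grid sequencer framework itself is not shown in this listing; the sketch below only illustrates, under assumed timing values, the timing relationships the abstract describes: sequences that share a bar but differ in step count (polyrhythm) versus sequences that share a step size but differ in loop length (polymeter).

```python
from math import lcm

def polyrhythm(steps_a, steps_b, bar_s=2.0):
    """Two sequences spanning the same bar with different step counts (e.g. 3 against 4)."""
    a = [i * bar_s / steps_a for i in range(steps_a)]
    b = [i * bar_s / steps_b for i in range(steps_b)]
    return a, b

def polymeter(len_a, len_b, step_s=0.25):
    """Two loops sharing one step size; their downbeats realign after lcm(len_a, len_b) steps."""
    total = lcm(len_a, len_b)
    a = [i * step_s for i in range(total) if i % len_a == 0]
    b = [i * step_s for i in range(total) if i % len_b == 0]
    return a, b

if __name__ == "__main__":
    print(polyrhythm(3, 4))   # classic three-against-four within one two-second bar
    print(polymeter(5, 7))    # 5-step and 7-step loops over a shared quarter-second grid
```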
Citations: 0
Deepening presence: probing the hidden artefacts of everyday soundscapes
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411120
Natasha Barrett
Sound penetrates our outdoor spaces. Much of it we ignore amid our fast passage from place to place; its qualities may be too quiet or fleeting to register above the bustle of our own thoughts, or we may experience the sounds as an annoyance. Manoeuvring our listening to be excited by its features is not so easy. This paper presents new artistic research that probes the hidden artefacts of everyday soundscapes - the sounds and details which we ignore or fail to engage with - and draws them into a new audible reality. The work focuses on the affordances of spatial information in a novel combination of art and technology: site-specific composition and the ways of listening established by Schaeffer and his successors are combined with the technology of beam-forming from high-resolution (Eigenmike) Ambisonics recordings, Ambisonics sound-field synthesis, and the deployment of a new prototype loudspeaker. Underlying the artistic and scientific research is the hypothesis that spatially distributed information offers new opportunities to explore, isolate, and musically develop features of interest, and that composition should address the same degree of spatiality as the real landscape. The work is part of the 'Reconfiguring the Landscape' project investigating how 3-D electroacoustic composition and sound-art can incite a new awareness of outdoor sound environments.
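The work uses high-resolution Eigenmike recordings and a processing chain that is not detailed in the abstract. Purely as a first-order illustration of beamforming from an Ambisonics sound field, the sketch below steers a virtual microphone toward a chosen direction from B-format W/X/Y/Z channels; the pattern parameter and the neglect of channel-normalization conventions (FuMa, SN3D, N3D) are simplifying assumptions, not the authors' method.

```python
import numpy as np

def virtual_mic(w, x, y, z, azimuth, elevation, p=0.5):
    """Steer a first-order virtual microphone toward (azimuth, elevation).

    p = 1.0 gives an omni pattern, p = 0.0 a figure-of-eight, 0.5 a cardioid.
    Channel normalization conventions differ between formats and are ignored here.
    """
    directional = (x * np.cos(azimuth) * np.cos(elevation)
                   + y * np.sin(azimuth) * np.cos(elevation)
                   + z * np.sin(elevation))
    return p * w + (1.0 - p) * directional

if __name__ == "__main__":
    n = 48000
    rng = np.random.default_rng(1)
    w, x, y, z = (rng.standard_normal(n) for _ in range(4))   # placeholder B-format channels
    beam = virtual_mic(w, x, y, z, azimuth=np.pi / 4, elevation=0.0)
    print(beam.shape)
```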
Citations: 1
Was that me?: exploring the effects of error in gestural digital musical instruments
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411137
Dom Brown, C. Nash, Thomas J. Mitchell
Traditional Western musical instruments have evolved to be robust and predictable, responding consistently to the same player actions with the same musical response. Consequently, errors occurring in a performance scenario are typically attributed to the performer and thus a hallmark of musical accomplishment is a flawless musical rendition. Digital musical instruments often increase the potential for a second type of error as a result of technological failure within one or more components of the instrument. Gestural instruments using machine learning can be particularly susceptible to these types of error as recognition accuracy often falls short of 100%, making errors a familiar feature of gestural music performances. In this paper we refer to these technology-related errors as system errors, which can be difficult for players and audiences to disambiguate from performer errors. We conduct a pilot study in which participants repeat a note selection task in the presence of simulated system errors. The results suggest that, for the gestural music system under study, controlled increases in system error correspond to an increase in the occurrence and severity of performer error. Furthermore, we find the system errors reduce a performer's sense of control and result in the instrument being perceived as less accurate and less responsive.
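The study's exact error model is not given in the abstract. A minimal sketch of the general idea - injecting recognition errors into a note-selection task at a controlled rate - might look like the following; the note set, error rates, and trial counts are assumptions.

```python
import random

NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def recognize(intended_note, system_error_rate, rng):
    """Return the intended note, or a different one when a simulated system error fires."""
    if rng.random() < system_error_rate:
        return rng.choice([n for n in NOTES if n != intended_note])
    return intended_note

def run_trials(system_error_rate, n_trials=100, seed=0):
    """Measure how often the produced note differs from the intended one."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_trials):
        intended = rng.choice(NOTES)
        if recognize(intended, system_error_rate, rng) != intended:
            errors += 1
    return errors / n_trials

if __name__ == "__main__":
    for rate in (0.0, 0.1, 0.2):      # controlled increases in simulated system error
        print(rate, run_trials(rate))
```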
Citations: 2
A pattern system for sound processes
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411151
Hanns Holger Rutz
This article reports on a new library for the ScalaCollider and Sound Processes computer music environments, a translation and adaptation of the patterns subsystem known from SuperCollider. From the perspective of electroacoustic music, patterns can easily be overlooked by reducing their meaning to the production of "notes" in the manner of "algorithmic composition". However, we show that they can be understood as a particular kind of programming language, considering them as a domain specific language for structures inspired by collection processing. Using examples from SuperCollider created by Ron Kuivila during an artistic research residency embedded in our project Algorithms that Matter, we show the challenges in translating this system from one programming language with a particular set of paradigms to another. If this process is studied as a reconfiguration of an algorithmic ensemble, the translated system produces new usage scenarios hitherto not possible.
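The library targets ScalaCollider and Sound Processes, and its API is not reproduced here. As a language-neutral illustration of treating patterns as lazy collection processes, in the spirit of SuperCollider's Pseq, Pwhite, and Pbind, the hypothetical Python sketch below composes value streams into a stream of events; all names and semantics are assumptions, not the library's API.

```python
import itertools
import random

def pseq(values, repeats=None):
    """Cycle through a fixed list of values, finitely or forever (a sequential pattern)."""
    if repeats is None:
        return itertools.cycle(values)
    return iter([v for _ in range(repeats) for v in values])

def pwhite(lo, hi, seed=0):
    """An endless stream of uniform random values between lo and hi."""
    rng = random.Random(seed)
    while True:
        yield rng.uniform(lo, hi)

def pbind(**streams):
    """Zip named value streams into a stream of event dictionaries."""
    keys = list(streams)
    for values in zip(*streams.values()):
        yield dict(zip(keys, values))

if __name__ == "__main__":
    events = pbind(degree=pseq([0, 2, 4, 7]), dur=pwhite(0.1, 0.4))
    for event in itertools.islice(events, 6):
        print(event)
```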
Citations: 1
The influence of mood induction by music or a soundscape on presence and emotions in a virtual reality park scenario
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411129
Angelika C. Kern, W. Ellermeier, Lina Jost
Music and background sound are often used in virtual realities to create an emotional atmosphere. The present study investigates how music or an ambient soundscape influences presence, the feeling of "being there", as well as positive and negative affect. Fifty-one subjects participated, taking a stroll through a virtual park presented via a head-mounted display while walking on a treadmill. Sound was varied within subjects across four audio conditions: in a randomized sequence, participants experienced silence, a nature soundscape, and music of positive or negative valence. In addition, time of day in the virtual environment (daytime vs. nighttime walk) was varied between subjects. Afterwards, participants were asked to rate their experience of presence and the positive and negative affect experienced. Results indicated that playing any kind of sound led to higher presence ratings than no sound at all, but there was no difference between playing a soundscape and playing music. Background music, however, tended to induce the expected emotions, though somewhat dependent on the musical pieces chosen. Further studies might evaluate whether it is possible to induce emotions through positive or negative (non-musical) soundscapes as well.
Citations: 5
Data-driven feedback delay network construction for real-time virtual room acoustics
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411145
J. Shen, R. Duraiswami
For virtual and augmented reality applications, it is desirable to render audio sources in the space the user is in, in real time, without sacrificing the perceptual quality of the sound. One aspect of the rendering that is perceptually important for a listener is the late reverberation, or "echo", of the sound within a room environment. A popular method of generating a plausible late reverberation in real time is the use of a Feedback Delay Network (FDN). However, its use has the drawback that it first has to be tuned (usually manually) for a particular room before the generated late reverberation becomes perceptually accurate. In this paper, we propose a data-driven approach to automatically generate a pre-tuned FDN for any given room described by a set of room parameters. When combined with existing methods for rendering the direct path and early reflections of a sound source, we demonstrate the feasibility of rendering audio sources in real time for interactive applications.
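The paper's data-driven tuning is not reproduced here; the sketch below is only a generic four-line feedback delay network (Hadamard feedback matrix, assumed delay lengths, and a decay gain derived from an assumed RT60) showing the structure that such a method would parameterize per room.

```python
import numpy as np

def hadamard4():
    """Orthonormal 4x4 Hadamard matrix used as an energy-preserving feedback matrix."""
    h2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    return np.kron(h2, h2) / 2.0

def fdn(x, fs=48000, delays=(1031, 1327, 1523, 1871), rt60=1.2):
    """Minimal 4-line feedback delay network producing a late-reverberation tail."""
    A = hadamard4()
    delays = np.asarray(delays)
    # Per-line gain chosen so that energy decays by 60 dB after rt60 seconds.
    g = 10.0 ** (-3.0 * delays / (rt60 * fs))
    buffers = [np.zeros(d) for d in delays]
    idx = np.zeros(len(delays), dtype=int)
    y = np.zeros(len(x))
    for n, sample in enumerate(x):
        outs = np.array([buf[i] for buf, i in zip(buffers, idx)])
        y[n] = outs.sum()                      # wet output: sum of delay-line outputs
        feedback = A @ (g * outs)              # mix attenuated outputs back into the lines
        for k, buf in enumerate(buffers):
            buf[idx[k]] = sample + feedback[k]
            idx[k] = (idx[k] + 1) % len(buf)
    return y

if __name__ == "__main__":
    impulse = np.zeros(48000)
    impulse[0] = 1.0
    print(fdn(impulse)[:5])                    # the start of a synthetic room response
```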
Citations: 3
Designing interactive sonic artefacts for dance performance: an ecological approach
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3412297
Raul Masu, N. Correia, S. Jürgens, Jochen Feitsch, T. Romão
In this paper, we propose to consider the sonic interactions that occur in a dance performance from an ecological perspective. In particular, we suggest using the conceptual models of artefact ecology and design space. As a case study, we present a work developed during a two-week artistic residency involving a sound designer, a choreographer, and two dancers. During the residency, both an interactive sound artefact based on a motion-capture system and a dance performance were developed. We present the ecology of the interactive sound artefact developed for the dance performance, with the objective of analysing how the ecologies of multiple actors relate to the interactive artefact.
Citations: 13
Teaching immersive media at the "dawn of the new everything"
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411121
Anil Çamci
In this paper, we discuss the design and implementation of a college-level course on immersive media at a performing arts institution. Focusing on the artistic applications of modern virtual reality technologies, the course aims to offer students a practice-based understanding of the concepts, tools and techniques involved in the design of audiovisual immersive systems and experiences. We describe the course structure and outline the intermixing of practical exercises with critical theory. We provide details of the design projects and discussion tasks assigned throughout the semester. We then discuss the outcome of a course evaluation session conducted with students. Finally, we identify the main challenges and opportunities for educators dealing with modern immersive media technologies with the hope that the findings offered in this paper can support the design and delivery of similar courses in a range of music and arts curricula.
Citations: 3