The impact of scaling the production of a new interface for musical expression on its design: a story of L2Orkmotes
Kyriakos D. Tsoukalas, J. Kubalak, I. Bukvic
DOI: https://doi.org/10.1145/3411109.3411110
Innovation in new musical interfaces is largely driven by ground-up endeavors that introduce a level of redundancy. Inspired by the successes of the iPhone and other industry innovations driven by iteration, consolidation, and scalability, we present a new interface for musical expression and discuss key elements of its implementation and integration into an existing and established laptop ensemble. In 2019, the Linux Laptop Orchestra of Virginia Tech (L2Ork) introduced the L2Orkmote, a custom reverse-engineered variant of the Wii Remote and Nunchuk controller that reorganizes sensors and buttons using an additively manufactured housing. The goal was to equip each orchestra member with two of the newly designed L2Orkmotes, which resulted in the production of 40 L2Orkmotes. This large-scale production mandated software improvements, including the development of a robust API that can support such a large number of concurrently connected Bluetooth devices. Considering that new interfaces for musical expression (NIMEs) are rarely designed to scale, we report on its design. Additionally, we describe a large-scale real-world deployment concurrently utilizing 28 L2Orkmotes and the supporting usability evaluation, and discuss the impact of scaling NIME production on its design.
{"title":"The impact of scaling the production of a new interface for musical expression on its design: a story of L2Orkmotes","authors":"Kyriakos D. Tsoukalas, J. Kubalak, I. Bukvic","doi":"10.1145/3411109.3411110","DOIUrl":"https://doi.org/10.1145/3411109.3411110","url":null,"abstract":"The innovation in the new musical interfaces is largely driven by the ground up endeavors that introduce a level of redundancy. Inspired by the successes of iPhone and other industry innovations that were driven by iteration, consolidation, and scalability, we present a new interface for musical expression and discuss key elements of its implementation and integration into an existing and established laptop ensemble. In 2019, the Linux Laptop Orchestra of Virginia Tech (L2Ork) introduced the L2Orkmote, a custom reverse engineered variant of the Wii Remote and Nunchuk controller that reorganizes sensors and buttons using an additively manufactured housing. The goal was to equip each orchestra member with two of the newly designed L2Orkmotes, which resulted to the production of 40 L2Orkmotes. This large-scale production mandated software improvements, including the development of a robust API that can support such a large number of concurrently connected Bluetooth devices. Considering that new musical interfaces for musical expression (NIMEs) are rarely designed to scale, we report on the design. Additionally, we share the large-scale real-world deployment concurrently utilizing 28 L2Orkmotes, the supporting usability evaluation, and discuss the impact of scaling NIME production on its design.","PeriodicalId":368424,"journal":{"name":"Proceedings of the 15th International Audio Mostly Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122960290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast synthesis of perceptually adequate room impulse responses from ultrasonic measurements
Jing Yang, Felix Pfreundtner, Amit Barde, K. Heutschi, Gábor Sörös
DOI: https://doi.org/10.1145/3411109.3412300
Audio augmented reality (AAR) applications need to render virtual sounds with acoustic effects that match the real environment of the user to create an experience with a strong sense of presence. This audio rendering process can be formulated as the convolution between the dry sound signal and the room impulse response (IR) covering the audible frequency spectrum (20 Hz - 20 kHz). While the IR can be pre-calculated in virtual reality (VR) scenes, AR applications need to continuously estimate it. We propose a method to synthesize room IRs based on the corresponding IR in the ultrasound frequency band (20 kHz - 22 kHz) and two parameters we introduce in this paper: the slope factor and the RT60 ratio. We assess the synthesized IRs using common acoustic metrics, and we conducted a user study to evaluate the perceived similarity between sounds rendered with the synthesized IR and with the recorded IR in different rooms. The method requires only a small number of pre-measurements in the environment to determine the synthesis parameters and uses only inaudible signals at runtime for fast IR synthesis, making it well suited for interactive AAR applications.
{"title":"Fast synthesis of perceptually adequate room impulse responses from ultrasonic measurements","authors":"Jing Yang, Felix Pfreundtner, Amit Barde, K. Heutschi, Gábor Sörös","doi":"10.1145/3411109.3412300","DOIUrl":"https://doi.org/10.1145/3411109.3412300","url":null,"abstract":"Audio augmented reality (AAR) applications need to render virtual sounds with acoustic effects that match the real environment of the user to create an experience with strong sense of presence. This audio rendering process can be formulated as the convolution between the dry sound signal and the room impulse response (IR) that covers the audible frequency spectrum (20Hz - 20kHz). While the IR can be pre-calculated in virtual reality (VR) scenes, AR applications need to continuously estimate it. We propose a method to synthesize room IRs based on the corresponding IR in the ultrasound frequency band (20kHz - 22kHz) and two parameters we propose in this paper: slope factor and RT60 ratio. We assess the synthesized IRs using common acoustic metrics and we conducted a user study to evaluate participants' perceptual similarity between the sounds rendered with the synthesized IR and with the recorded IR in different rooms. The method requires only a small number of pre-measurements in the environment to determine the synthesis parameters and it uses only inaudible signals at runtime for fast IR synthesis, making it well suited for interactive AAR applications.","PeriodicalId":368424,"journal":{"name":"Proceedings of the 15th International Audio Mostly Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122205537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring polyrhythms, polymeters, and polytempi with the universal grid sequencer framework
Samuel J. Hunt
DOI: https://doi.org/10.1145/3411109.3411122
Polyrhythms, polymeters, and polytempi are compositional techniques that describe pulses that are desynchronized between two or more sequences of music. Digital systems permit the sequencing of notes to a near-infinite degree of resolution, permitting a vast number of complex rhythmic attributes in the music. Such techniques can be challenging to work with and notate effectively within existing popular music sequencing software and notations. Step sequencers provide a simple and effective interface for exploring any arbitrary division of time into an even number of steps, and such interfaces are easily expressed on grid-based music controllers. The paper therefore has two distinct but related outputs. Firstly, to demonstrate a framework for working with multiple physical grid controllers forming a larger unified grid, and to provide a consolidated set of tools for programming music instruments for it. Secondly, to demonstrate how such a system provides a low entry threshold for exploring polyrhythm, polymeter, and polytempo relationships using desynchronised step sequencers.
{"title":"Exploring polyrhythms, polymeters, and polytempi with the universal grid sequencer framework","authors":"Samuel J. Hunt","doi":"10.1145/3411109.3411122","DOIUrl":"https://doi.org/10.1145/3411109.3411122","url":null,"abstract":"Polyrhythms, Polymeters, and Polytempo are compositional techniques that describe pulses which are desynchronous between two or more sequences of music. Digital systems permit the sequencing of notes to a near-infinite degree of resolution, permitting an exponential number of complex rhythmic attributes in the music. Exploring such techniques within existing popular music sequencing software and notations can be challenging to generally work with and notate effectively. Step sequencers provide a simple and effective interface for exploring any arbitrary division of time into an even number of steps, with such interfaces easily expressible on grid based music controllers. The paper therefore has two differing but related outputs. Firstly, to demonstrate a framework for working with multiple physical grid controllers forming a larger unified grid, and provide a consolidated set of tools for programming music instruments for it. Secondly, to demonstrate how such a system provides a low-entry threshold for exploring Polyrhytms, Polymeters and Polytempo relationships using desynchronised step sequencers.","PeriodicalId":368424,"journal":{"name":"Proceedings of the 15th International Audio Mostly Conference","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129473395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deepening presence: probing the hidden artefacts of everyday soundscapes
Natasha Barrett
DOI: https://doi.org/10.1145/3411109.3411120
Sound penetrates our outdoor spaces. Much of it we ignore amidst our fast passage from place to place; its qualities may be too quiet or fleeting to pay heed to above the bustle of our own thoughts, or we may experience the sounds as an annoyance. Manoeuvring our listening to be excited by its features is not so easy. This paper presents new artistic research that probes the hidden artefacts of everyday soundscapes - the sounds and details which we ignore or fail to engage with - and draws them into a new audible reality. The work focuses on the affordances of spatial information in a novel combination of art and technology: site-specific composition and the ways of listening established by Schaeffer and his successors are combined with the technology of beam-forming from high-resolution (Eigenmike) Ambisonics recordings, Ambisonics sound-field synthesis, and the deployment of a new prototype loudspeaker. Underlying the artistic and scientific research is the hypothesis that spatially distributed information offers new opportunities to explore, isolate, and musically develop features of interest, and that composition should address the same degree of spatiality as the real landscape. The work is part of the 'Reconfiguring the Landscape' project, which investigates how 3-D electroacoustic composition and sound-art can incite a new awareness of outdoor sound environments.
{"title":"Deepening presence: probing the hidden artefacts of everyday soundscapes","authors":"Natasha Barrett","doi":"10.1145/3411109.3411120","DOIUrl":"https://doi.org/10.1145/3411109.3411120","url":null,"abstract":"Sound penetrates our outdoor spaces. Much of it we ignore amidst our fast passage from place to place, its qualities may be too quiet or fleeting to pay heed to above the bustle of our own thoughts, or we may experience the sounds as an annoyance. Manoeuvring our listening to be excited by its features is not so easy. This paper presents new artistic research that probes the hidden artefacts of everyday soundscapes - the sounds and details which we ignore or fail to engage - and draws them into a new audible reality. The work focuses on the affordances of spatial information in a novel combination of art and technology: site-specific composition and the ways of listening established by Schaeffer and his successors are combined with the technology of beam-forming from high resolution (Eigenmike) Ambisonics recordings, Ambisonics sound-field synthesis and the deployment of a new prototype loudspeaker. Underlying the artistic and scientific research is the hypothesis that spatially distributed information offers new opportunities to explore, isolate and musically develop features of interest, and that composition should address the same degree of spatiality as the real landscape. The work is part of the 'Reconfiguring the Landscape' project investigating how 3-D electroacoustic composition and sound-art can incite a new awareness of outdoor sound environments.","PeriodicalId":368424,"journal":{"name":"Proceedings of the 15th International Audio Mostly Conference","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115780984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Was that me?: exploring the effects of error in gestural digital musical instruments
Dom Brown, C. Nash, Thomas J. Mitchell
DOI: https://doi.org/10.1145/3411109.3411137
Traditional Western musical instruments have evolved to be robust and predictable, responding consistently to the same player actions with the same musical response. Consequently, errors occurring in a performance scenario are typically attributed to the performer, and thus a hallmark of musical accomplishment is a flawless musical rendition. Digital musical instruments often increase the potential for a second type of error as a result of technological failure within one or more components of the instrument. Gestural instruments using machine learning can be particularly susceptible to these types of error, as recognition accuracy often falls short of 100%, making errors a familiar feature of gestural music performances. In this paper we refer to these technology-related errors as system errors, which can be difficult for players and audiences to disambiguate from performer errors. We conduct a pilot study in which participants repeat a note selection task in the presence of simulated system errors. The results suggest that, for the gestural music system under study, controlled increases in system error correspond to an increase in the occurrence and severity of performer error. Furthermore, we find that system errors reduce a performer's sense of control and result in the instrument being perceived as less accurate and less responsive.
{"title":"Was that me?: exploring the effects of error in gestural digital musical instruments","authors":"Dom Brown, C. Nash, Thomas J. Mitchell","doi":"10.1145/3411109.3411137","DOIUrl":"https://doi.org/10.1145/3411109.3411137","url":null,"abstract":"Traditional Western musical instruments have evolved to be robust and predictable, responding consistently to the same player actions with the same musical response. Consequently, errors occurring in a performance scenario are typically attributed to the performer and thus a hallmark of musical accomplishment is a flawless musical rendition. Digital musical instruments often increase the potential for a second type of error as a result of technological failure within one or more components of the instrument. Gestural instruments using machine learning can be particularly susceptible to these types of error as recognition accuracy often falls short of 100%, making errors a familiar feature of gestural music performances. In this paper we refer to these technology-related errors as system errors, which can be difficult for players and audiences to disambiguate from performer errors. We conduct a pilot study in which participants repeat a note selection task in the presence of simulated system errors. The results suggest that, for the gestural music system under study, controlled increases in system error correspond to an increase in the occurrence and severity of performer error. Furthermore, we find the system errors reduce a performer's sense of control and result in the instrument being perceived as less accurate and less responsive.","PeriodicalId":368424,"journal":{"name":"Proceedings of the 15th International Audio Mostly Conference","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123168529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A pattern system for sound processes
Hanns Holger Rutz
DOI: https://doi.org/10.1145/3411109.3411151
This article reports on a new library for the ScalaCollider and Sound Processes computer music environments, a translation and adaptation of the patterns subsystem known from SuperCollider. From the perspective of electroacoustic music, patterns can easily be overlooked by reducing their meaning to the production of "notes" in the manner of "algorithmic composition". However, we show that they can be understood as a particular kind of programming language, considering them as a domain-specific language for structures inspired by collection processing. Using examples from SuperCollider created by Ron Kuivila during an artistic research residency embedded in our project Algorithms that Matter, we show the challenges in translating this system from one programming language with a particular set of paradigms to another. If this process is studied as a reconfiguration of an algorithmic ensemble, the translated system produces new usage scenarios hitherto not possible.
The influence of mood induction by music or a soundscape on presence and emotions in a virtual reality park scenario
Angelika C. Kern, W. Ellermeier, Lina Jost
DOI: https://doi.org/10.1145/3411109.3411129
Music and background sound are often used in virtual realities to create an emotional atmosphere. The present study investigates how music or an ambient soundscape influences presence, the feeling of "being there", as well as positive and negative affect. Fifty-one subjects participated, taking a stroll through a virtual park presented via a head-mounted display while walking on a treadmill. Sound was varied within subjects in four audio conditions: in a randomized sequence, participants experienced silence, a nature soundscape, and music of positive or negative valence. In addition, time of day (daytime vs. nighttime walk) in the virtual environment was varied between subjects. Afterwards, participants were asked to rate their experience of presence and the positive and negative affect they experienced. Results indicated that replaying any kind of sound led to higher presence ratings compared to no sound at all, but there was no difference between playing a soundscape or music. Background music, however, tended to induce the expected emotions, though somewhat dependent on the musical pieces chosen. Further studies might evaluate whether it is possible to induce emotions through positive or negative (non-musical) soundscapes as well.
{"title":"The influence of mood induction by music or a soundscape on presence and emotions in a virtual reality park scenario","authors":"Angelika C. Kern, W. Ellermeier, Lina Jost","doi":"10.1145/3411109.3411129","DOIUrl":"https://doi.org/10.1145/3411109.3411129","url":null,"abstract":"Music and background sound are often used in virtual realities for creating an emotional atmosphere. The present study investigates how music or an ambient soundscape influence presence, the feeling of \"being there\", as well as positive and negative affect. Fifty-one subjects participated, taking a stroll through a virtual park presented via a head-mounted display while they were walking on a treadmill. Sound was varied within subjects in four audio conditions: In a randomized sequence, participants experienced silence, a nature soundscape and music of positive or negative valence. In addition, time of day (daytime vs. nighttime walk) in the virtual environment was varied between subjects. Afterwards they were asked to rate their experience of presence and the positive and negative affect experienced. Results indicated that replaying any kind of sound lead to higher presence ratings compared to no sound at all, but there was no difference between playing a soundscape or music. Background music, however, tended to induce the expected emotions, though somewhat dependent on the musical pieces chosen. Further studies might evaluate whether it is possible to induce emotions through positive or negative (non-musical) soundscapes as well.","PeriodicalId":368424,"journal":{"name":"Proceedings of the 15th International Audio Mostly Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129643268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data-driven feedback delay network construction for real-time virtual room acoustics
J. Shen, R. Duraiswami
DOI: https://doi.org/10.1145/3411109.3411145
For virtual and augmented reality applications, it is desirable to render audio sources in the space the user is in, in real time, without sacrificing the perceptual quality of the sound. One aspect of the rendering that is perceptually important for a listener is the late reverberation, or "echo", of the sound within a room environment. A popular method of generating a plausible late reverberation in real time is the use of feedback delay networks (FDNs). However, an FDN first has to be tuned (usually manually) for a particular room before the late reverberation it generates becomes perceptually accurate. In this paper, we propose a data-driven approach to automatically generate a pre-tuned FDN for any given room described by a set of room parameters. Combined with existing methods for rendering the direct path and early reflections of a sound source, we demonstrate the feasibility of rendering audio sources in real time for interactive applications.
{"title":"Data-driven feedback delay network construction for real-time virtual room acoustics","authors":"J. Shen, R. Duraiswami","doi":"10.1145/3411109.3411145","DOIUrl":"https://doi.org/10.1145/3411109.3411145","url":null,"abstract":"For virtual and augmented reality applications, it is desirable to render audio sources in the space the user is in, in real-time without sacrificing the perceptual quality of the sound. One aspect of the rendering that is perceptually important for a listener is the late-reverberation, or \"echo\", of the sound within a room environment. A popular method of generating a plausible late reverberation in realtime is the use of Feedback Delay Networks (FDN). However, its use has the drawback that it first has to be tuned (usually manually) for a particular room before the late-reverberation generated becomes perceptually accurate. In this paper, we propose a data-driven approach to automatically generate a pre-tuned FDN for any given room described by a set of room parameters. When combined with existing method for rendering the direct path and early reflections of a sound source, we demonstrate the feasibility of being able to render audio source in real-time for interactive applications.","PeriodicalId":368424,"journal":{"name":"Proceedings of the 15th International Audio Mostly Conference","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127297185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing interactive sonic artefacts for dance performance: an ecological approach
Raul Masu, N. Correia, S. Jürgens, Jochen Feitsch, T. Romão
DOI: https://doi.org/10.1145/3411109.3412297
In this paper, we propose to consider the sonic interactions that occur in a dance performance from an ecological perspective. In particular, we suggest using the conceptual models of artefact ecology and design space. As a case study, we present a work developed during a two-week artistic residency, a collaboration between a sound designer, a choreographer, and two dancers. During the residency, both an interactive sound artefact based on a motion capture system and a dance performance were developed. We present the ecology of the interactive sound artefact developed for the dance performance, with the objective of analysing how the multiple actors in this ecology relate to the interactive artefact.
{"title":"Designing interactive sonic artefacts for dance performance: an ecological approach","authors":"Raul Masu, N. Correia, S. Jürgens, Jochen Feitsch, T. Romão","doi":"10.1145/3411109.3412297","DOIUrl":"https://doi.org/10.1145/3411109.3412297","url":null,"abstract":"In this paper, we propose to consider the sonic interactions that occurs in a dance performance from an ecological perspective. In particular, we suggest using the conceptual models of artefact ecology and design space. As a case study, we present a work developed during a two weeks artistic residency in collaboration between a sound designer, one choreographer, and two dancers. During the residency both an interactive sound artefact based on a motion capture system, and a dance performance were developed. We present the ecology of an interactive sound artefact developed for the dance performance, with the objective to analyse how the ecology of multiple actors relate themselves to the interactive artefact.","PeriodicalId":368424,"journal":{"name":"Proceedings of the 15th International Audio Mostly Conference","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126248455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teaching immersive media at the "dawn of the new everything"
Anil Çamci
DOI: https://doi.org/10.1145/3411109.3411121
In this paper, we discuss the design and implementation of a college-level course on immersive media at a performing arts institution. Focusing on the artistic applications of modern virtual reality technologies, the course aims to offer students a practice-based understanding of the concepts, tools, and techniques involved in the design of audiovisual immersive systems and experiences. We describe the course structure and outline the intermixing of practical exercises with critical theory. We provide details of the design projects and discussion tasks assigned throughout the semester. We then discuss the outcome of a course evaluation session conducted with students. Finally, we identify the main challenges and opportunities for educators dealing with modern immersive media technologies, with the hope that the findings offered in this paper can support the design and delivery of similar courses in a range of music and arts curricula.
{"title":"Teaching immersive media at the \"dawn of the new everything\"","authors":"Anil Çamci","doi":"10.1145/3411109.3411121","DOIUrl":"https://doi.org/10.1145/3411109.3411121","url":null,"abstract":"In this paper, we discuss the design and implementation of a college-level course on immersive media at a performing arts institution. Focusing on the artistic applications of modern virtual reality technologies, the course aims to offer students a practice-based understanding of the concepts, tools and techniques involved in the design of audiovisual immersive systems and experiences. We describe the course structure and outline the intermixing of practical exercises with critical theory. We provide details of the design projects and discussion tasks assigned throughout the semester. We then discuss the outcome of a course evaluation session conducted with students. Finally, we identify the main challenges and opportunities for educators dealing with modern immersive media technologies with the hope that the findings offered in this paper can support the design and delivery of similar courses in a range of music and arts curricula.","PeriodicalId":368424,"journal":{"name":"Proceedings of the 15th International Audio Mostly Conference","volume":"200 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121262621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}