"Voice-based interface for accessible soundscape composition: composing soundscapes by vocally querying online sounds repositories" — L. Turchet, Alex Zanetti. Proceedings of the 15th International Audio Mostly Conference (2020). DOI: 10.1145/3411109.3411113

This paper presents an Internet of Audio Things ecosystem devised to support soundscape composition via vocal interactions. The ecosystem involves a commercial voice-based interface and Freesound.org, a cloud-based repository of audio content. User-system interactions are based exclusively on vocal inputs and outputs, and differ from the conventional methods for retrieval and sound editing, which involve a browser and programs running on a desktop PC. The ecosystem targets sound designers interested in soundscape composition, in particular visually impaired ones, with the aim of making soundscape composition practice more accessible. We report the results of a user study conducted with twelve participants. Overall, the results show that the interface was found usable and was deemed easy to use and to learn. Participants reported enjoying the system and generally felt that it was effective in supporting their creativity during the process of composing a soundscape.
"Contrasts and similarities between two audio research communities in evaluating auditory artefacts" — Mariana Seiça, Licinio Gomes Roque, P. Martins, F. A. Cardoso. Proceedings of the 15th International Audio Mostly Conference (2020). DOI: 10.1145/3411109.3411146

The design of auditory artefacts has been establishing its practice as a scientific area for more than 20 years, and a crucial element in this process is how to properly evaluate acoustic outputs. In this paper, we sought to map the evaluation methods applied in two main audio-focused conferences: Audio Mostly and the International Conference on Auditory Display (ICAD). Revisiting last year's editions, as well as conducting a keyword-based search over the last ten years, we attempted to gather and classify each evaluation method according to the level of user involvement, the users' role, and the authors' intentions in using each method. We propose an initial mapping of this material as a framework of evaluation approaches that can reinforce and expand current practices in the creation of auditory artefacts.
"Capturing kinetic wave demonstrations for sound control" — J. Granzow, Matias Vilaplana, Anil Çamci. Proceedings of the 15th International Audio Mostly Conference (2020). DOI: 10.1145/3411109.3411150

In musical acoustics, wave propagation, reflection, phase inversion, and boundary conditions can be hard to conceptualize. Physical kinetic wave demonstrations offer visible and tangible experiences of wave behavior and facilitate active learning. We implement such kinetic demonstrations, a long spring and a Shive machine, using contemporary fabrication techniques. Furthermore, we employ motion capture (MoCap) technology to transform these kinetic assemblies into audio controllers. Time-varying coordinates of MoCap markers integrated into the assemblies are mapped to audio parameters, closing a multi-sensory loop in which visual analogues of acoustic phenomena are in turn used to control digital audio. The project leads to a pedagogical practice where fabrication and sensing technologies are used to reconstitute demonstrations for the eye as controllers for the ear.
"Towards molecular musical instruments: interactive sonifications of 17-alanine, graphene and carbon nanotubes" — Thomas J. Mitchell, Alex J. Jones, Michael B. O'Connor, Mark D. Wonnacott, D. Glowacki, J. Hyde. Proceedings of the 15th International Audio Mostly Conference (2020). DOI: 10.1145/3411109.3411143

Scientists increasingly rely on computational models of atoms and molecules to observe, understand and make predictions about the microscopic world. Atoms and molecules are in constant motion, with vibrations and structural fluctuations occurring at very short time-scales and corresponding length-scales. But can these microscopic oscillations be converted into sound? And what would they sound like? In this paper we present our initial steps towards a generalised approach for sonifying data produced by a real-time molecular dynamics simulation. The approach uses scanned synthesis to translate real-time geometric simulation data into audio. The process is embedded within a standalone application as well as a variety of audio plugin formats, so that it can be used as an audio synthesis method for music making. We review the relevant background literature before providing an overview of our system. Simulations of three molecules are then considered: 17-alanine, graphene and a carbon nanotube. Four examples demonstrate how the technique maps molecular features and parameters onto the auditory character of the resulting sound. Finally, a case study is provided in which the sonification/synthesis method is used within a musical composition.
"An auditory interface for realtime brainwave similarity in dyads" — R. M. Winters, Stephanie Koziej. Proceedings of the 15th International Audio Mostly Conference (2020). DOI: 10.1145/3411109.3411147

We present a case study in the development of a "hyperscanning" auditory interface that transforms realtime brainwave similarity between interacting dyads into music. Our instrument extends reality in face-to-face communication with a musical stream reflecting an invisible socio-neurophysiological signal. This instrument contributes to the historical context of brain-computer interfaces (BCIs) applied to art and music, but is unique because it is contingent on the correlation between the brainwaves of the dyad, and because it conveys this information using entirely auditory feedback. We designed the instrument to be i) easy to understand, ii) relatable and iii) pleasant for members of the general public in an exhibition context. We present how this context and user group led to our choice of EEG hardware, inter-brain similarity metric, and auditory mapping strategy. We discuss our experience following four public exhibitions, as well as future improvements to the instrument design and user experience.
"How do you sound design?: an exploratory investigation of sound design process visualizations" — D. Hug. Proceedings of the 15th International Audio Mostly Conference (2020). DOI: 10.1145/3411109.3411144

Sound design is increasingly diversifying into many areas beyond its traditional domains of film, television, radio and theatre. As a result, sound designers are confronted with a multitude of design and development processes, and the related methodologies shape how problems are framed and what is considered an ideal path towards their solutions. From this arises a need for an educated discourse in sound design education and professional practice. This article investigates the creative process from the perspective of an emerging generation of sound designers. The first part of the paper outlines concepts and models of the design process in various fields of practice. The second part is devoted to an interpretive comparative analysis of sound design process visualizations created by sound design students with a professional background. Beyond gaining a better understanding of the creative process of sound designers, the goal of this work is to contribute to a better integration of the sound design craft into contemporary design process methodologies, ultimately empowering the sound designer in complex, dynamic and interdisciplinary project settings.
"Don't extend! reduce!: the sound approach to reality" — Mads Walther-Hansen, M. Grimshaw-Aagaard. Proceedings of the 15th International Audio Mostly Conference (2020). DOI: 10.1145/3411109.3411111

In this paper we propose a less-is-more concept of reduced reality that VR designers can use to create technological frameworks that reduce sensory overload and allow for better concentration and focus, less stress, and novel scenarios. We question the approach typically taken in XR research, where the focus is to design and use technology that adds sensory information to the user's perceptual field, and we address some of the confusion surrounding typical uses of the term reality. To address the latter terminological muddle, we define reality as our conscious experience of the environment as emergent perception, and we use this definition as the basis for a discussion of the role of sound in balancing sensory information and in constructing less cluttered and less stressful perceptual environments.
"From 8-bit punk to 8-bit avant-garde: designing an embedded platform to control vintage sound chips" — Victor Zappi. Proceedings of the 15th International Audio Mostly Conference (2020). DOI: 10.1145/3411109.3411148

Music technology has advanced remarkably since the 1980s, yet the 8-bit sounds of computers and video game consoles from that era are still considered iconic and difficult to replicate. The sound chips originally used in these devices are no longer compatible with modern tools for music making, heavily constraining further exploration of this popular aesthetic. In this paper, I present the ongoing development of a novel platform, built with open-source embedded technologies and designed to integrate vintage sound chips into widely used music programming and instrument design frameworks. The goal of the project is to innovate chiptune music practice, while preserving the role of authentic hardware and fostering the appropriation of its signature limitations.
"Quantum synth: a quantum-computing-based synthesizer" — Omar Costa Hamido, G. Cirillo, Edoardo Giusto. Proceedings of the 15th International Audio Mostly Conference (2020). DOI: 10.1145/3411109.3411135

In this paper we present the Quantum Synth project, an interface between Qiskit and Max for controlling sound synthesis parameters encoded on the basis states of a quantum computer. The sound synthesis is driven by the possible measurement outcomes of a quantum circuit. We demonstrate the effects of fundamental quantum operations, such as those in the Bell circuit for generating entangled states, and of Grover's search algorithm. The interface is designed to be used by music performers and composers in their creative process, and as a resource both to learn quantum computing and to analyze the intrinsic noise of real quantum hardware.
"Sounding feet" — D. Bisig, Pablo Palacio. Proceedings of the 15th International Audio Mostly Conference (2020). DOI: 10.1145/3411109.3411112

The project Sounding Feet explores the creative possibilities of interactively controlling sound synthesis through pressure-sensitive shoe inlays that can monitor minute body movements. The project is motivated by the authors' own experience of working with interactive technologies in the context of dance. This experience has led to the desire to relate the sensing capabilities of an interactive system more closely to a dancer's own body awareness, which prominently involves aspects of inner perception. The outcome of the project demonstrates that such an approach can help establish interactive musical scenarios for dance that are not only more intuitive for dancers to work with, but also offer composers new possibilities to tap into aspects of the dancers' expressivity that are normally hidden from an audience.