
Latest Publications: Proceedings of the 15th International Audio Mostly Conference

Voice-based interface for accessible soundscape composition: composing soundscapes by vocally querying online sounds repositories
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411113
L. Turchet, Alex Zanetti
This paper presents an Internet of Audio Things ecosystem devised to support soundscape composition via vocal interactions. The ecosystem involves a commercial voice-based interface and the cloud-based repository of audio content Freesound.org. The user-system interactions are based exclusively on vocal inputs and outputs, and differ from conventional retrieval and sound-editing methods, which involve a browser and programs running on a desktop PC. The developed ecosystem targets sound designers interested in soundscape composition, in particular visually impaired ones, with the aim of making soundscape composition practice more accessible. We report the results of a user study conducted with twelve participants. Overall, results show that the interface was found usable and was deemed easy to use and to learn. Participants reported that they enjoyed using the system and generally felt that it effectively supported their creativity during the process of composing a soundscape.
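The abstract describes turning a vocal query into a search against Freesound.org. A minimal sketch of that step, assuming a transcribed query string and using the public Freesound APIv2 text-search endpoint (the token value is a placeholder, and this is not the authors' implementation):

```python
from urllib.parse import urlencode

FREESOUND_SEARCH = "https://freesound.org/apiv2/search/text/"

def build_search_url(spoken_query: str, api_token: str, page_size: int = 5) -> str:
    """Turn a transcribed vocal query into a Freesound APIv2 text-search URL.

    Endpoint and parameter names follow the public Freesound APIv2;
    the token is a placeholder, not a working credential.
    """
    params = {
        "query": spoken_query,           # what the user said
        "page_size": page_size,          # keep result lists short for audio playback
        "fields": "id,name,previews",    # request only what preview playback needs
        "token": api_token,
    }
    return FREESOUND_SEARCH + "?" + urlencode(params)

url = build_search_url("rain on a tin roof", "YOUR_API_TOKEN")
```

Fetching the URL and reading the result list aloud via text-to-speech would close the vocal input/output loop the paper describes.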
Citations: 4
Contrasts and similarities between two audio research communities in evaluating auditory artefacts
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411146
Mariana Seiça, Licinio Gomes Roque, P. Martins, F. A. Cardoso
The design of auditory artefacts has been establishing its practice as a scientific area for more than 20 years, with a crucial element in this process being how to properly evaluate acoustic outputs. In this paper, we sought to map the evaluation methods applied across two main audio-focused conferences: Audio Mostly and the International Conference on Auditory Display (ICAD). Revisiting last year's editions, as well as conducting a keyword-based search over the last ten years, we attempted to gather and classify each evaluation method according to the level of user involvement, the users' role, and the authors' intentions in using each method. We propose an initial mapping of this material, in a framework of evaluation approaches which can reinforce and expand current practices in the creation of auditory artefacts.
Citations: 6
Capturing kinetic wave demonstrations for sound control
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411150
J. Granzow, Matias Vilaplana, Anil Çamci
In musical acoustics, wave propagation, reflection, phase inversion, and boundary conditions can be hard to conceptualize. Physical kinetic wave demonstrations offer visible and tangible experiences of wave behavior and facilitate active learning. We implement such kinetic demonstrations, a long spring and a Shive machine, using contemporary fabrication techniques. Furthermore, we employ motion capture (MoCap) technology to transform these kinetic assemblies into audio controllers. Time-varying coordinates of MoCap markers integrated into the assemblies are mapped to audio parameters, closing a multi-sensory loop where visual analogues of acoustic phenomena are in turn used to control digital audio. The project leads to a pedagogical practice where fabrication and sensing technologies are used to reconstitute demonstrations for the eye as controllers for the ear.
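The core mechanism here is mapping a time-varying marker coordinate onto an audio parameter. A minimal sketch of one such mapping, with illustrative ranges (the capture volume and frequency bounds are assumptions, not values from the paper):

```python
def map_marker_to_pitch(y, y_min=0.0, y_max=2.0, f_lo=110.0, f_hi=880.0):
    """Map a MoCap marker's vertical coordinate (metres) to an oscillator
    frequency (Hz) with a simple linear scaling. The ranges are
    illustrative assumptions, not values from the paper."""
    y = min(max(y, y_min), y_max)        # clamp to the capture volume
    t = (y - y_min) / (y_max - y_min)    # normalise to 0..1
    return f_lo + t * (f_hi - f_lo)

freq = map_marker_to_pitch(1.0)          # marker at mid-height
```

In practice such a mapping would run once per MoCap frame, feeding the result to the synthesis engine as a control-rate signal.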
Citations: 0
Towards molecular musical instruments: interactive sonifications of 17-alanine, graphene and carbon nanotubes
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411143
Thomas J. Mitchell, Alex J. Jones, Michael B. O'Connor, Mark D. Wonnacott, D. Glowacki, J. Hyde
Scientists increasingly rely on computational models of atoms and molecules to observe, understand and make predictions about the microscopic world. Atoms and molecules are in constant motion, with vibrations and structural fluctuations occurring at very short time-scales and corresponding length-scales. But can these microscopic oscillations be converted into sound? And, what would they sound like? In this paper we present our initial steps towards a generalised approach for sonifying data produced by a real-time molecular dynamics simulation. The approach uses scanned synthesis to translate real-time geometric simulation data into audio. The process is embedded within a stand-alone application as well as a variety of audio plugin formats to enable the process to be used as an audio synthesis method for music making. We review the relevant background literature before providing an overview of our system. Simulations of three molecules are then considered: 17-alanine, graphene and a carbon nanotube. Four examples are then provided demonstrating how the technique maps molecular features and parameters onto the auditory character of the resulting sound.
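Scanned synthesis, as named in the abstract, reads a slowly evolving shape as a wavetable at an audible scan rate. A generic sketch of that reading step, with a sine-shaped array standing in for atom displacements from a dynamics simulation (this is not the paper's implementation):

```python
import numpy as np

def scan_wavetable(table, freq, sr=44100, dur=0.01):
    """Read an arbitrary 1-D shape (here standing in for atom displacements
    from a dynamics simulation) as a wavetable scanned at `freq` Hz.
    A generic scanned-synthesis sketch, not the paper's code."""
    n = len(table)
    t = np.arange(int(sr * dur))
    phase = (freq * t / sr) % 1.0                 # 0..1 scan position per sample
    idx = phase * n
    i0 = np.floor(idx).astype(int) % n
    i1 = (i0 + 1) % n
    frac = idx - np.floor(idx)
    return (1 - frac) * table[i0] + frac * table[i1]  # linear interpolation

# A snapshot of "displacements" used as the scanned shape.
displacements = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
samples = scan_wavetable(displacements, freq=220.0)
```

In a real-time setting the table contents would be refreshed each simulation step, so the timbre tracks the molecule's motion while the scan rate sets the pitch.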
Citations: 2
An auditory interface for realtime brainwave similarity in dyads
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411147
R. M. Winters, Stephanie Koziej
We present a case-study in the development of a "hyperscanning" auditory interface that transforms realtime brainwave-similarity between interacting dyads into music. Our instrument extends reality in face-to-face communication with a musical stream reflecting an invisible socio-neurophysiological signal. This instrument contributes to the historical context of brain-computer interfaces (BCIs) applied to art and music, but is unique because it is contingent on the correlation between the brainwaves of the dyad, and because it conveys this information using entirely auditory feedback. We designed the instrument to be i) easy to understand, ii) relatable and iii) pleasant for members of the general public in an exhibition context. We present how this context and user group led to our choice of EEG hardware, inter-brain similarity metric, and our auditory mapping strategy. We discuss our experience following four public exhibitions, as well as future improvements to the instrument design and user experience.
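The instrument hinges on an inter-brain similarity metric. The paper's exact metric is not given in the abstract, so the following is an illustrative stand-in: Pearson correlation of two short EEG windows, rescaled to a 0..1 control signal for the musical mapping.

```python
import numpy as np

def brainwave_similarity(eeg_a, eeg_b):
    """Pearson correlation of two short EEG windows, mapped to 0..1.
    One plausible inter-brain similarity metric; an illustrative
    stand-in, not the authors' published formula."""
    r = np.corrcoef(eeg_a, eeg_b)[0, 1]
    return (r + 1.0) / 2.0               # -1..1 correlation -> 0..1 control signal

a = np.sin(np.linspace(0, 4 * np.pi, 256))
same = brainwave_similarity(a, a)        # identical windows -> maximal similarity
opposite = brainwave_similarity(a, -a)   # anti-correlated windows -> minimal
```

In a realtime system this would run on a sliding window per EEG channel or band, with the 0..1 output driving, e.g., harmonic consonance or mix level.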
Citations: 1
How do you sound design?: an exploratory investigation of sound design process visualizations
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411144
D. Hug
Sound design is increasingly diversifying into many areas beyond its traditional domains in film, television, radio or theatre. This leads to sound designers being confronted with a multitude of design and development processes. The related methodologies have an impact on how problems are framed and what is considered an ideal path to achieve their solutions. From this a need for an educated discourse in sound design education and professional practice arises. This article investigates the creative process from the perspective of an emerging generation of sound designers. The first part of the paper outlines concepts and models of the design process in various fields of practice. The second part is devoted to an interpretive comparative analysis of sound design process visualizations created by sound design students with a professional background. Apart from gaining a better understanding of the creative process of the sound designers, the goal of this work is to contribute to a better integration of the sound design craft into contemporary design process methodologies, ultimately leading to an empowerment of the sound designer in complex, dynamic and interdisciplinary project settings.
Citations: 3
Don't extend! reduce!: the sound approach to reality
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411111
Mads Walther-Hansen, M. Grimshaw-Aagaard
In this paper we propose a reduced reality concept of less-is-more that VR designers can use to create technological frameworks that reduce sensory overload and allow for better concentration and focus, less stress, and novel scenarios. We question the approach taken by scholars in the field of XR research, where the focus is typically to design and use technology that adds sensory information to the user's perceptual field, and we address some of the confusion related to the typical uses of the term reality. To address the latter terminological muddle, we define reality as our conscious experience of the environment as emergent perception, and we use this definition as the basis for a discussion of the role of sound in balancing sensory information and in the construction of a less cluttered and less stressful perceptual environment.
Citations: 1
From 8-bit punk to 8-bit avant-garde: designing an embedded platform to control vintage sound chips
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411148
Victor Zappi
Music technology has advanced remarkably since the 1980s, yet the 8-bit sounds of computers and video game consoles from that era are still considered iconic and difficult to replicate. The sound chips originally used in these devices are no longer compatible with modern tools for music making, heavily constraining the further exploration of this popular aesthetic. With this paper, I present the ongoing development of a novel platform, built with open-source embedded technologies, and designed for the integration of vintage sound chips in widely used music programming and instrument design frameworks. The goal of the project is to innovate chiptune music practice, while preserving the role of authentic hardware and fostering the appropriation of its signature limitations.
Citations: 0
Quantum synth: a quantum-computing-based synthesizer
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411135
Omar Costa Hamido, G. Cirillo, Edoardo Giusto
In this paper we present the Quantum Synth project, an interface between Qiskit and Max for controlling sound synthesis parameters encoded on the basis states of a quantum computer. This sound synthesis is obtained from the potential measured outcomes of a quantum circuit. We demonstrate the effects of fundamental quantum operations, as found in the Bell circuit for generating entangled states, and of Grover's search algorithm. The interface has been designed to be used by music performers and composers in their creative process, and as a resource both to learn Quantum Computing and to analyze the intrinsic noise of real quantum hardware.
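The Bell circuit mentioned in the abstract (H on one qubit, then CNOT) yields measurement counts split between |00> and |11>, which can then drive synthesis parameters. A plain-NumPy sketch of that sampling step, standing in for the Qiskit backend the paper actually interfaces with Max (not the authors' code):

```python
import numpy as np

def bell_counts(shots=1000, seed=0):
    """Simulate measuring the Bell state (|00> + |11>)/sqrt(2), i.e. the
    circuit H on qubit 0 followed by CNOT. A plain-NumPy stand-in for
    running the circuit on a Qiskit backend."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0                                        # start in |00>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    state = CNOT @ np.kron(H, I) @ state                  # H on qubit 0, then CNOT
    probs = np.abs(state) ** 2                            # Born-rule probabilities
    rng = np.random.default_rng(seed)
    outcomes = rng.choice(4, size=shots, p=probs)
    labels = ["00", "01", "10", "11"]
    return {labels[k]: int((outcomes == k).sum()) for k in range(4)}

counts = bell_counts()
```

Normalising such counts (e.g. counts["00"] / shots) gives values in 0..1 that could be sent to Max as gains or other synthesis parameters per basis state.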
Citations: 7
Sounding feet
Pub Date : 2020-09-15 DOI: 10.1145/3411109.3411112
D. Bisig, Pablo Palacio
The project Sounding Feet explores the creative possibilities of interactively controlling sound synthesis through pressure-sensitive shoe inlays that can monitor minute body movements. The project is motivated by the authors' own experience of working with interactive technologies in the context of dance. This experience has led to the desire to relate the sensing capabilities of an interactive system more closely to a dancer's own body awareness, which prominently involves aspects of inner perception. The outcome of this project demonstrates that such an approach can help to establish interactive musical scenarios for dance that are not only more intuitive for dancers to work with, but that also offer composers new possibilities to tap into aspects of the dancers' expressivity that are normally hidden from an audience.
Citations: 2