Abstract: This article presents design and performance practices for movement-based digital musical instruments. We develop the notion of borrowed gestures, a gesture-first approach that composes a gestural vocabulary from nonmusical body movements combined with nuanced instrumental gestures. These practices explore new affordances for physical interaction by transferring the expressive qualities and communicative aspects of body movements borrowed from nonmusical domains. By merging musical and nonmusical domains through movement interaction, borrowed gestures offer shared performance spaces and cross-disciplinary practices. Our approach centers on the use of the body, and on designing with body movement, when developing digital musical instruments. The performer's body becomes an intermediate medium, physically connecting and uniting the performer and the instrument. This approach creates new ways of conceptualizing and designing movement-based musical interaction: (1) offering a design framework that transforms a broader range of expressive gestures (including nonmusical gestures) into sonic and musical interactions, and (2) creating a new dynamic between performer and instrument that reframes nonmusical gestures, such as dance movements or sign language gestures, into musical contexts. We aesthetically evaluate our design framework and performance practices through three case studies: Bodyharp, Armtop, and Felt Sound. As part of this evaluation, we also present a set of design principles as a way of thinking about designing movement-based digital musical instruments.
{"title":"Borrowed Gestures: The Body as an Extension of the Musical Instrument","authors":"Doga Cavdir;Ge Wang","doi":"10.1162/comj_a_00617","DOIUrl":"10.1162/comj_a_00617","url":null,"abstract":"Abstract This article presents design and performance practices for movement-based digital musical instruments. We develop the notion of borrowed gestures, which is a gesture-first approach that composes a gestural vocabulary of nonmusical body movements combined with nuanced instrumental gestures. These practices explore new affordances for physical interaction by transferring the expressive qualities and communicative aspects of body movements; these body movements and their qualities are borrowed from nonmusical domains. By merging musical and nonmusical domains through movement interaction, borrowed gestures offer shared performance spaces and cross-disciplinary practices. Our approach centers on use of the body and the design with body movement when developing digital musical instruments. The performer's body becomes an intermediate medium, physically connecting and uniting the performer and the instrument. This approach creates new ways of conceptualizing and designing movement-based musical interaction: (1) offering a design framework that transforms a broader range of expressive gestures (including nonmusical gestures) into sonic and musical interactions, and (2) creating a new dynamic between performer and instrument that reframes nonmusical gestures—such as dance movements or sign language gestures—into musical contexts. We aesthetically evaluate our design framework and performance practices based on three case studies: Bodyharp, Armtop, and Felt Sound. As part of this evaluation, we also present a set of design principles as a way of thinking about designing movement-based digital musical instruments.","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 3","pages":"58-80"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43739178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
that conceptual space of materiality, virtual place, and virtual place as apparatus, and in a spiritual space where preexisting material features become part of the virtual image (credit to virtual image scholar and professor Lisa Zaher). The concert's final piece, "Moksha Black" by King Britt featuring Roba El-Essawy, disseminates sonospatial models that can be transduced (based on registered aspects) into one's metaphysical space (which is cognitively and psychoacoustically entangled). For example, the initial voices are farther away and larger in metaphysical space than in physical space; then a square-like tone cuts through in metaphysical space; and later, a voice is physically still while circling and stuttering in metaphysical space. Towards the end, voices are vertical poles (in allocentric space) from which frequency spectra fall like glitter and sparks. "Sun Ra's gift was understanding the passage of humankind through large swaths of time. . . . The sound of those insects. That continuity. Who's to say that that sound . . . can't be interpreted as a meaningful sequence of something like language abiding to something like a grammar?" (Thomas Stanley, in conversation with the reviewer). In his keynote address "You Haven't Met the Captain of the Spaceship . . . Yet," Thomas Stanley presented extensive information about interfacing with and interpreting Sun Ra's teachings, including myth as tech—specifically, Alter Destiny, a leap into a zone of justice that is now possible because the original myth of dominion has gradually become unstable. It involves solving, simultaneously, the many crises predicated on that myth (e.g., racism, intergroup conflict, "extractive capitalism and the filth that goes along with this way of life," potential mutually assured destruction, capitalist labor, an American empire whose populace is largely "distracted, paid off, sedated by . . . the fruits of oppression that happen in other peoples' country"). This seems impossible, but the resolve "to be that broad in our attempts to ameliorate the situation is the starting point" (Stanley, in conversation with the reviewer). Sun Ra's music contains messages that can help us question our fundamental beliefs rooted in that myth. The Sounds In Focus II concert begins with "The Shaman Ascending" by Barry Truax: a constantly circling vocal, not circling in metaphysical space, through which spectral processes sculpt a spider-shaped cavern around me in allocentric and metaphysical space. In "Abwesenheit," John Young clinically and playfully makes audible the air currents and stases in the room. Lidia Zielinska's "Backstage Pass" treats idiomatic piano moments as seeds nourished with playful curiosity and passion, presented with spatial polymatic frequency poiesis in a room-sized piano bed. To start the Sounds Cubed II concert, centripetal whispers in "śūnyatā" by Chris Coleman construct connective tissue to the Cube's center. In "Toys" by Orestis Karamanlis, flutters of sonic pulses along the perimeter oscillate into spatially balanced (centripetally stable) patterns, clearly outlining shapes that map intuitively onto the body; it is easy to imagine one's bodily sensors sitting on those patterns. The patterns articulate and secrete space in a manner akin to a point cloud (and it is easy to imagine a real-time mirroring of them), yet they also demand particular velocities and timings of the pulses, inviting further study of the virtual body and the sense of space. The Listening Hall event in the Cube presented Eric Lyon's spatialization of Sun Ra's mind-altering album Space Is The Place, in which the spatial orientations of instruments and the arrangement of interwoven lines compose a giant floating head, directional flames propelling negative-sound rockets, a figurative hypercube, and the spherical life-radiance of each sound source in a denser medium. The audience was rapt, and the spatialization let many hear new sounds in the album. It was a perfect close to Cube Fest 2022: not only astonishing in itself, but a momentous historical event that will spark countless future experiments. I look forward to the next one.
{"title":"Leo Magnien: Clarières","authors":"Seth Rozanoff","doi":"10.1162/comj_r_00618","DOIUrl":"10.1162/comj_r_00618","url":null,"abstract":"that conceptual space of materiality, virtual place, and virtual place as apparatus, and in a spiritual space where preexisting material features become part of virtual image (credit to virtual image scholar and professor Lisa Zaher). The concert’s final piece, “Moksha Black” by King Britt featuring Roba El-Essawy, disseminates sonospatial models that can be transduced (based on registered aspects) into one’s metaphysical space (which is cognitively and psychoacoustically entangled). For example, the initial voices are farther and larger in metaphysical space than physical space; then, a square-like tone cuts through in metaphysical space; and later, voice is physically still while circling and stuttering in metaphysical space. Towards the end, voices are vertical poles (in allocentric space) from which frequency spectra fall like glitter and sparks. “Sun Ra’s gift was understanding the passage of humankind through large swaths of time. . . . The sound of those insects. That continuity. Who’s to say that that sound . . . can’t be interpreted as a meaningful sequence of something like language abiding to something like a grammar?” (Thomas Stanley, in conversation with the reviewer). In his keynote address “You Haven’t Met the Captain of the Spaceship. . . Yet,” Thomas Stanley presented extensive info about interfacing with and interpreting Sun Ra’s teachings, including myth as tech—specifically, Alter Destiny, a leap into a zone of justice that is now possible because the original myth of dominion has gradually become unstable. It involves solving the many crises (e.g., racism, intergroup conflict, “extractive capitalism and the filth that goes along with this way of life,” potential mutually assured destruction, capitalist labor, an American empire whose populace is largely “distracted, paid off, sedated by . . . the fruits of oppression that happen in other peoples’ country”) predicated on that myth, simultaneously. This seems impossible, but the resolve “to be that broad in our attempts to ameliorate the situation is the starting point” (Stanley, in conversation with the reviewer). Sun Ra’s music contains messages that can help us question our fundamental beliefs rooted in that myth. The Sounds In Focus II concert begins with “The Shaman Ascending” by Barry Truax: a constantly circling vocal not circling in metaphysical space, through which spectral processes sculpt a spider-shaped cavern around me in allocentric and metaphysical space. In “Abwesenheit,” John Young clinically and playfully makes audible the air currents and stases in the room. Lidia Zielinska’s “Backstage Pass” treats idiomatic piano moments as seeds nourished with playful curiosity and passion, presented with spatial polymatic frequency poiesis in a room-sized piano bed. To start the Sounds Cubed II concert, centripetal whispers in “śūnyatā” by Chris Coleman construct connective tissue to the Cube’s center. 
In “Toys” by Orestis Karamanlis, flutters of sonic pulses along the pe","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 3","pages":"83-85"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47004546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract: EmissionControl2 (EC2) is a precision tool that provides a versatile and expressive platform for granular synthesis education, research, performance, and studio composition. It is available as a free download on all major operating systems. In this article, we describe the theoretical underpinnings of the software and expose the design choices made in creating this instrument. We present a brief historical overview and cover the main features of EC2, with an emphasis on per-grain processing, which renders each grain as a unique particle of sound. We discuss the graphical user interface design choices, the theory of operation, and intended use cases that guided these choices. We describe the architecture of the real-time per-grain granular engine, which emits grains in synchronous or asynchronous streams. We conclude with an evaluation of the software.
{"title":"Architecture for Real-Time Granular Synthesis With Per-Grain Processing: EmissionControl2","authors":"Curtis Roads;Jack Kilgore;Rodney DuPlessis","doi":"10.1162/comj_a_00613","DOIUrl":"10.1162/comj_a_00613","url":null,"abstract":"Abstract EmissionControl2 (EC2) is a precision tool that provides a versatile and expressive platform for granular synthesis education, research, performance, and studio composition. It is available as a free download on all major operating systems. In this article, we describe the theoretical underpinnings of the software and expose the design choices made in creating this instrument. We present a brief historical overview and cover the main features of EC2, with an emphasis on per-grain processing, which renders each grain as a unique particle of sound. We discuss the graphical user interface design choices, the theory of operation, and intended use cases that guided these choices. We describe the architecture of the real-time per-grain granular engine, which emits grains in synchronous or asynchronous streams. We conclude with an evaluation of the software.","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 3","pages":"20-38"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46638143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hubert Howe (see Figure 1) received AB, MFA, and PhD degrees from Princeton University, where he studied with J. K. ("Jim") Randall, Godfrey Winham, and Milton Babbitt. As one of the early researchers in computer music, he was a principal contributor to the development of the Music 4B and Music 4BF programs. In 1968, he joined the faculty of Queens College of the City University of New York (CUNY), where he became a professor of music and director of the electronic music studios. He also taught computer music at the Juilliard School in Manhattan for 20 years. Howe has been a member of the American Composers Alliance since 1974 and served as its President from 2002 to 2011. He is also a member of the New York Composers Circle, where he has served as Executive Director since 2013. He is currently active as Director of the New York City Electroacoustic Music Festival, which he founded in 2009. Recordings of his music have been released on the labels Capstone and Centaur, among others. This conversation took place over Zoom during March and April 2022. It begins with a look at Howe's student years at Princeton and traces his pioneering journey through to his musical activity today. Aspects of his composition and programming work are discussed, as well as his thoughts on pitch structure and timbral approaches to composition. More information about his music and work can be found at http://www.huberthowe.org.
{"title":"Fundamental Sound: A Conversation with Hubert Howe","authors":"Mark Zaki","doi":"10.1162/comj_a_00611","DOIUrl":"10.1162/comj_a_00611","url":null,"abstract":"Hubert Howe (see Figure 1) received AB, MFA, and PhD degrees from Princeton University, where he studied with J. K. (“Jim”) Randall, Godfrey Winham, and Milton Babbitt. As one of the early researchers in computer music, he was a principal contributor to the development of the Music 4B and Music 4BF programs. In 1968, he joined the faculty of Queens College of the City University of New York (CUNY), where he became a professor of music and director of the electronic music studios. He also taught computer music at the Juilliard School in Manhattan for 20 years. Howe has been a member of the American Composers Alliance since 1974 and has served as its President from 2002 to 2011. He is also a member of the New York Composers Circle and has served as Executive Director since 2013. He is currently active as Director of the New York City Electroacoustic Music Festival, which he founded in 2009. Recordings of his music have been released on the labels Capstone and Centaur, among others. This conversation took place over Zoom during March and April 2022. It begins with a look at Howe’s student years at Princeton and traces his pioneering journey through to his musical activity today. Aspects of his composition and programming work are discussed, as well as his thoughts on pitch structure and timbral approaches to composition. More information about his music and work can be found at http://www.huberthowe.org.","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 3","pages":"9-19"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42794851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joseph Hanson Kwabena Nketia (b. 22 June 1921-d. 13 March 2019) from Ghana was the preeminent scholar of African musics, whose field research in the 1940s in varied ways formed the foundation of music scholarship in Africa and predated ethnomusicology as an academic discipline in the United States. A prolific writer, music educator, and composer, his publications on key topics in African musicology are pivotal to the transdisciplinary field of African studies. Born and raised in Asante Mampong, Nketia was tutored in two worlds of knowledge systems: his traditional musical environment generated and sustained a lifelong interest in indigenous systems, and his European-based formal education provided the space for scholarship at home and around the world. At the Presbyterian Training College at Akropong-Akwapem, he was introduced to the elements of European music by Robert Danso and Ephraim Amu. The latter's choral and instrumental music in the African idiom made a lasting impression on Nketia as he combined oral compositional conventions in traditional music with compositional models in European classical music in his own written compositions. From 1944 to 1949, Nketia studied modern linguistics at SOAS, University of London. His mentor was John Firth, who spearheaded the famous London school of linguistics. He also enrolled at the Trinity College of Music and Birkbeck College to study Western music, English, and history. The results of his studies in linguistics and history are the classic texts cited in this bibliography. From 1952 to 1979, Nketia held positions at the University of Ghana, including research fellow in sociology, founding director of the School of Performing Arts, and first African director of the Institute of African Studies; together with Mawere Opoku, he established the Ghana Dance Ensemble. During this period he embarked on extensive field research and documentation of music traditions all over Ghana. His students and the school provided creative outlets for his scholarly publications as he trained generations of Ghanaians. In 1958, a Rockefeller Foundation Fellowship enabled Nketia to study composition and musicology at Juilliard and Columbia with the likes of Henry Cowell, and he came out convinced that his compositions should reflect his African identity. Further, he interacted with Curt Sachs, Melville Herskovits, Alan Merriam, and Mantle Hood, which placed Nketia at the center of intellectual debates in the formative years of ethnomusicology. From 1979 to 1983, Nketia was appointed to the faculty of the Institute of Ethnomusicology at UCLA; and from 1983 to 1991, to the Mellon Chair at the University of Pittsburgh, where he trained generations of Americans and Africans. Nketia returned to Ghana and founded the International Center for African Music and Dance (1992-2010) and also served as the first chancellor of the Akrofi-Christaller Institute of Theology (2006-2016).
{"title":"J.H. Kwabena Nketia","authors":"","doi":"10.1093/obo/9780199757824-0294","DOIUrl":"https://doi.org/10.1093/obo/9780199757824-0294","url":null,"abstract":"Joseph Hanson Kwabena Nketia (b. 22 June 1921–d. 13 March 2019) from Ghana was the preeminent scholar of African musics, whose field research in the 1940s in varied ways formed the foundation of music scholarship in Africa and predated ethnomusicology as an academic discipline in the United States. A prolific writer, music educator, and composer, his publications on key topics in African musicology are pivotal to the transdisciplinary field of African studies. Born and raised in Asante Mampong, Nketia was tutored in two worlds of knowledge systems: his traditional musical environment generated and sustained a lifelong interest in indigenous systems, and his European-based formal education provided the space for scholarship at home and around the world. At the Presbyterian Training College at Akropong-Akwapem, he was introduced to the elements of European music by Robert Danso and Ephraim Amu. The latter’s choral and instrumental music in the African idiom made a lasting impression on Nketia as he combined oral compositional conventions in traditional music with compositional models in European classical music in his own written compositions. From 1944 to 1949, Nketia studied modern linguistics in SOAS at the University of London. His mentor was John Firth, who spearheaded the famous London school of linguistics. He also enrolled at the Trinity College of Music and Birkbeck College to study Western music, English, and history. The result of his studies in linguistics and history are the publications of classic texts cited in this bibliography. From 1952 to 1979, Nketia held positions at the University of Ghana including a research fellow in sociology, the founding director of the School of Performing Arts, and the first African director of the Institute of African Studies; and together with Mawere Opoku, he established the Ghana Dance Ensemble. This was a time that he embarked on extensive field research and documentation of music traditions all over Ghana. His students and the school provided creative outlets for his scholarly publications as he trained generations of Ghanaians. In 1958, a Rockefeller Foundation Fellowship enabled Nketia to study composition and musicology at Juilliard and Columbia with the likes of Henry Cowell, and he came out convinced that his compositions should reflect his African identity. Further, he interacted with Curt Sachs, Melville Herskovits, Alan Merriam, and Mantle Hood, which placed Nketia at the center of intellectual debates in the formative years of ethnomusicology. From 1979 to 1983, Nketia was appointed to the faculty of the Institute of Ethnomusicology at UCLA; and from 1983 to 1991, to the Mellon Chair at the University of Pittsburgh, where he trained generations of Americans and Africans. Nketia returned to Ghana and founded the International Center for African Music and Dance (1992–2010) and also served as the first chancellor of the Akrofi-Christaller Institute of Theology (2006–2016). 
Joseph Hanson Kwa","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"77 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85702174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This issue’s articles each consider a different area of audio processing. The first three deal with specific signal-processing techniques in the areas of filtering, spatialization, and synthesis, respectively. The fourth concerns data mining in audio corpora, typically employing descriptors obtained from signal processing. In the first article, Lazzarini and Timoney present their digital filter designs that are derived from analog filters. The authors contend that examining the high-level block diagrams and transfer functions of an analog model can yield benefits not found in the “virtual analog” approach of attempting to analyze and reproduce every detail of a specific analog circuit. As evidence, they offer both linear and nonlinear versions of a digital filter derived from the analog state variable filter. They then extend the nonlinear design to a filter that goes beyond the analog model by incorporating ideas stemming from waveshaping synthesis. In the area of spatialization, the article by Schlienger and Khashchanskiy demonstrates how acoustic localization can be used effectively, and at low cost, for tracking the position of a person participating in a musical performance or an art installation. Acoustic localization ascertains the distance and direction of a sound source or a sound recipient. The authors take advantage of loudspeakers already deployed in a performance, adding a measurement signal that is above the frequency range of human hearing to the audible music that the loudspeaker may be concurrently emitting. The human participant
{"title":"About This Issue","authors":"","doi":"10.1162/comj_e_00602","DOIUrl":"10.1162/comj_e_00602","url":null,"abstract":"This issue’s articles each consider a different area of audio processing. The first three deal with specific signal-processing techniques in the areas of filtering, spatialization, and synthesis, respectively. The fourth concerns data mining in audio corpora, typically employing descriptors obtained from signal processing. In the first article, Lazzarini and Timoney present their digital filter designs that are derived from analog filters. The authors contend that examining the high-level block diagrams and transfer functions of an analog model can yield benefits not found in the “virtual analog” approach of attempting to analyze and reproduce every detail of a specific analog circuit. As evidence, they offer both linear and nonlinear versions of a digital filter derived from the analog state variable filter. They then extend the nonlinear design to a filter that goes beyond the analog model by incorporating ideas stemming from waveshaping synthesis. In the area of spatialization, the article by Schlienger and Khashchanskiy demonstrates how acoustic localization can be used effectively, and at low cost, for tracking the position of a person participating in a musical performance or an art installation. Acoustic localization ascertains the distance and direction of a sound source or a sound recipient. The authors take advantage of loudspeakers already deployed in a performance, adding a measurement signal that is above the frequency range of human hearing to the audible music that the loudspeaker may be concurrently emitting. The human participant","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 2","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48270272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract: This article presents an extension of Iannis Xenakis's Dynamic Stochastic Synthesis (DSS) called Diffusion Dynamic Stochastic Synthesis (DDSS). This extension solves a diffusion equation whose solutions can be used to map particle positions to amplitude values of several breakpoints in a waveform, following traditional concepts of DSS by directly shaping the waveform of a sound. One significant difference between DSS and DDSS is that the latter includes a drift in the Brownian trajectories that each breakpoint experiences through time. Diffusion Dynamic Stochastic Synthesis can also be used in other ways, such as to control the amplitude values of an oscillator bank using additive synthesis, shaping in this case the spectrum rather than the waveform. This second modality goes against Xenakis's original desire to depart from classical Fourier synthesis. The results of spectral analyses of the DDSS waveform approach, implemented in the Max software environment, are discussed and compared with those of a simplified version of DSS; despite the similarity in the overall form of the frequency spectrum, noticeable differences are found. In addition to the Max implementation of the basic DDSS algorithm, a MIDI-controlled synthesizer is also presented here. With DDSS we introduce a real physical process, in this case diffusion, into traditional stochastic synthesis. This sort of sonification can suggest models of sound synthesis that are more complex and grounded in physical concepts.
{"title":"A Physically Inspired Implementation of Xenakis's Stochastic Synthesis: Diffusion Dynamic Stochastic Synthesis","authors":"Emilio L. Rojas;Rodrigo F. Cádiz","doi":"10.1162/comj_a_00606","DOIUrl":"10.1162/comj_a_00606","url":null,"abstract":"Abstract This article presents an extension of Iannis Xenakis's Dynamic Stochastic Synthesis (DSS) called Diffusion Dynamic Stochastic Synthesis (DDSS). This extension solves a diffusion equation whose solutions can be used to map particle positions to amplitude values of several breakpoints in a waveform, following traditional concepts of DSS by directly shaping the waveform of a sound. One significant difference between DSS and DDSS is that the latter includes a drift in the Brownian trajectories that each breakpoint experiences through time. Diffusion Dynamic Stochastic Synthesis can also be used in other ways, such as to control the amplitude values of an oscillator bank using additive synthesis, shaping in this case the spectrum, not the waveform. This second modality goes against Xenakis's original desire to depart from classical Fourier synthesis. The results of spectral analyses of the DDSS waveform approach, implemented using the software environment Max, are discussed and compared with the results of a simplified version of DSS to which, despite the similarity in the overall form of the frequency spectrum, noticeable differences are found. In addition to the Max implementation of the basic DDSS algorithm, a MIDI-controlled synthesizer is also presented here. With DDSS we introduce a real physical process, in this case diffusion, into traditional stochastic synthesis. This sort of sonification can suggest models of sound synthesis that are more complex and grounded in physical concepts.","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 2","pages":"48-66"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42656475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Expressive E, creator of the Touche MIDI/CV controller and the Osmose keyboard synthesizer/controller, has teamed up with Applied Acoustics Systems (AAS), renowned for their physical modeling software instruments, to create a new software plug-in instrument called Imagine (see Figure 1). Imagine allows the user to create and play sounds based on the resonant bodies of physical real-life instruments and to modify them to create fantastical instruments and new acoustic landscapes. Expressive E has created hundreds of presets for Imagine based on feedback from musicians, composers, sound designers, and producers. Each preset is made up
{"title":"Products of Interest","authors":"","doi":"10.1162/comj_r_00601","DOIUrl":"https://doi.org/10.1162/comj_r_00601","url":null,"abstract":"Expressive E, creator of the Touche MIDI/CV controller and the Osmose keyboard synthesizer/controller, has teamed up with Applied Acoustics Systems (AAS), renowned for their physical modeling software instruments, to create a new software plug-in instrument called Imagine (see Figure 1). Imagine allows the user to create and play sounds based on the resonant bodies of physical real-life instruments and to modify them to create fantastical instruments and new acoustic landscapes. Expressive E has created hundreds of presets for Imagine based on feedback from musicians, composers, sound designers, and producers. Each preset is made up","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 2","pages":"91-106"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49947452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract: The Acoustic Localization Positioning System is the outcome of several years of participatory development with musicians and artists working in the sonic arts, collaboratively aiming for nonobtrusive tracking and indoor positioning technology that facilitates spatial interaction and immersion. Based on previous work on application scenarios for spatial reproduction of moving sound sources and the conception of the kinaesthetic interface, a tracking system for spatially interactive sonic arts is presented here. It is an open-source implementation in the form of a stand-alone application and associated Max patches, using off-the-shelf, ubiquitous technology. Based on the findings of tests and experiments conducted in extensive creative workshops, we show how the approach addresses several technical problems and overcomes some typical obstacles to immersion in spatially interactive applications in sonic arts.
{"title":"Immersive Spatial Interactivity in Sonic Arts: The Acoustic Localization Positioning System","authors":"Dominik Schlienger;Victor Khashchanskiy","doi":"10.1162/comj_a_00605","DOIUrl":"10.1162/comj_a_00605","url":null,"abstract":"Abstract The Acoustic Localization Positioning System is the outcome of several years of participatory development with musicians and artists having a stake in sonic arts, collaboratively aiming for nonobtrusive tracking and indoors positioning technology that facilitates spatial interaction and immersion. Based on previous work on application scenarios for spatial reproduction of moving sound sources and the conception of the kinaesthetic interface, a tracking system for spatially interactive sonic arts is presented here. It is an open-source implementation in the form of a stand-alone application and associated Max patches. The implementation uses off-the-shelf, ubiquitous technology. Based on the findings of tests and experiments conducted in extensive creative workshops, we show how the approach addresses several technical problems and overcomes some typical obstacles to immersion in spatially interactive applications in sonic arts.","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 2","pages":"24-47"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44694106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Celebrating Electronics: Music by John Bischoff (Concert) and John Bischoff: Bitplicity (Album)","authors":"Ralph Lewis","doi":"10.1162/comj_r_00609","DOIUrl":"10.1162/comj_r_00609","url":null,"abstract":"","PeriodicalId":50639,"journal":{"name":"Computer Music Journal","volume":"45 2","pages":"84-85"},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49577153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}