"On making physical the control of audio plugins: the case of the retrologue hardware synthesizer"
L. Turchet, Samuel Willis, Gustav Andersson, Alberto Gianelli, Michele Benincaso
Proceedings of the 15th International Audio Mostly Conference, 2020. DOI: https://doi.org/10.1145/3411109.3411114

This paper reports the development of a prototype smart musical instrument that uses a virtual analog audio plugin in conjunction with a dedicated tangible interface and a platform for embedded audio. The adopted design approach started from an analog synthesizer, passed through its digital emulation, and returned to the analog domain via real-time, physical control of the digital synthesizer. The prototype can be considered an instance of a class of musical devices that give physical form to the control of virtual analog software. We present an analysis of online sources retrieved following the release of the prototype at an international music trade show. Overall, the results preliminarily validate the concept underlying the prototype and reveal its potential for both the development and the use of digital musical instruments. Benefits of the proposed class of musical devices include a higher degree of control intimacy with a plugin compared to its use with conventional interfaces such as the mouse and screen of a desktop computer, as well as the use of audio plugins in ubiquitous musical activities.
"Following the journey of scores through a complex musical work"
Adrian Hazzard, C. Greenhalgh, Maria Kallionpää
Proceedings of the 15th International Audio Mostly Conference, 2020. DOI: https://doi.org/10.1145/3411109.3411116

We chart the composition and technical development of Climb!, a non-linear, interactive work for pianist, Disklavier piano and interactive system. Collaborating with a composer and performers, we focus on the journey of score representations through this complex work, highlighting the processes and challenges faced from composition to performance. We reveal that multiple, distinct score representations of different types and formats were required to define and perform the work. Furthermore, we highlight a number of compromises that were necessary to create and manage the work's notations, representations and annotations.
"Sonification of an exoplanetary atmosphere"
M. Quinton, I. Mcgregor, D. Benyon
Proceedings of the 15th International Audio Mostly Conference, 2020. DOI: https://doi.org/10.1145/3411109.3411117

This study investigates the effectiveness of user-centred design methods for creating a sonification for an astronomer who analyses meteorological data from exoplanets situated in habitable zones. Requirements concerning the astronomer's work, the dataset and how to sonify it were identified using Grounded Theory. Parameter-mapping sonification was used to represent effective transit radii measurements through subtractive synthesis and spatialization. The design was considered effective, allowing the instantaneous identification of a water feature that had been overlooked on a visual graph, even when noise within the dataset overlapped the source signal. The results suggest that multiple parameter mappings provide richer auditory stimuli and semantic qualities, allowing an improved understanding of the dataset.
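The paper's synthesis pipeline is not reproduced here; as a minimal sketch of the parameter-mapping idea the abstract describes, one might map each effective transit radius sample to a subtractive-synthesis filter cutoff and a stereo position. All function names, parameter ranges and mappings below are illustrative assumptions, not the authors' design.

```python
def linmap(x, lo, hi, out_lo, out_hi):
    """Linearly map x from [lo, hi] to [out_lo, out_hi], clamping to the input range."""
    x = max(lo, min(hi, x))
    return out_lo + (x - lo) * (out_hi - out_lo) / (hi - lo)

def sonify_sample(radius, wavelength, r_min, r_max, w_min, w_max):
    """Map one spectrum sample to hypothetical synthesis parameters:
    a larger transit radius opens the filter (more spectral energy),
    and the wavelength position pans from left (-1.0) to right (+1.0)."""
    cutoff_hz = linmap(radius, r_min, r_max, 200.0, 4000.0)
    pan = linmap(wavelength, w_min, w_max, -1.0, 1.0)
    return {"cutoff_hz": cutoff_hz, "pan": pan}
```

Scanning the spectrum sample by sample through such a mapping would let a deep absorption feature stand out as a sudden timbral change, which is the kind of cue the abstract credits with revealing the overlooked water feature.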
"That password doesn't sound right: interactive password strength sonification"
Otto Hans-Martin Lutz, Jacob Leon Kröger, Manuel Schneiderbauer, Jan Maria Kopankiewicz, M. Hauswirth, T. Hermann
Proceedings of the 15th International Audio Mostly Conference, 2020. DOI: https://doi.org/10.1145/3411109.3412299

Despite two-factor authentication and other modern approaches, password authentication is still the most commonly used method on the Internet. Unfortunately, as analyses show, many users still choose weak, easy-to-guess passwords. To mitigate this problem, systems often employ textual or graphical feedback to make users aware of weak choices, but such feedback often falls short of engaging the user and achieving the intended reaction, i.e., choosing a stronger password. In this paper, we introduce auditory feedback as a complementary method, exploiting the advantages of sound as an affective medium. We investigate the conceptual space of creating usable auditory feedback on password strength, including functional and non-functional requirements, influences and design constraints. We present web-based implementations of four sonification designs for evaluating different characteristics of the conceptual space and define a research roadmap for optimization, evaluation and applications.
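The paper's four sonification designs are not reproduced here. As a minimal, hypothetical sketch of the underlying idea, one could map a password-strength score (for example the 0 to 4 scale produced by estimators such as zxcvbn) to musical consonance and register, so that a weak password literally sounds "wrong". The chord choices and frequency mapping below are assumptions for illustration only.

```python
# Consonant triad for strong passwords, dissonant one for weak passwords
# (semitone offsets above a base pitch); these choices are illustrative.
MAJOR_TRIAD = (0, 4, 7)
DIMINISHED_TRIAD = (0, 3, 6)

def strength_to_sound(score):
    """Map a strength score in 0..4 to (chord offsets, base frequency in Hz).

    Weak passwords get a dissonant chord at a low pitch; stronger
    passwords get a consonant chord and a slightly higher base pitch."""
    if not 0 <= score <= 4:
        raise ValueError("score must be in 0..4")
    chord = MAJOR_TRIAD if score >= 3 else DIMINISHED_TRIAD
    base_hz = 220.0 * 2 ** (score / 12)  # rise one semitone per score step
    return chord, base_hz
```

Updating this mapping on every keystroke would give the interactive feedback loop the abstract describes, without requiring the user to look away from the password field.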
"Surround sound spreads visual attention and increases cognitive effort in immersive media reproductions"
Catarina Mendonça, Victoria Korshunova
Proceedings of the 15th International Audio Mostly Conference, 2020. DOI: https://doi.org/10.1145/3411109.3411118

The goal of this study was to explore the effects of different spatial sound configurations on visual attention and cognitive effort in an immersive environment. Different groups of people were exposed to the same immersive video with different soundtrack conditions: mono, stereo, 5.1 and 7.4.1, each an artistic adaptation of the same soundtrack. While watching the video, participants wore an eye-tracking device and were asked to perform a counting task. Gaze direction and pupil dilation metrics were obtained as measures of attention and cognitive effort. Results show that the 5.1 and 7.4.1 conditions were associated with a wider distribution of visual attention, with subjects spending more time gazing at task-irrelevant areas of the screen. The condition that led to the most concentrated attention on the task-relevant area was mono: the wider the spatial sound configuration, the greater the gaze distribution. The 7.4.1 and 5.1 conditions were also associated with larger pupil dilations than the mono and stereo conditions, indicating that they may increase cognitive demand and therefore task difficulty. We conclude that sound design should be carefully planned to prevent visual distraction: more surrounding spatialized sounds may lead to more distraction and more difficulty in following audiovisual content than less distributed sounds. We propose that sound spatialization and soundtrack design should be adapted to the audiovisual content and the task at hand, varying in immersiveness accordingly.
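The study's analysis code is not given; a common way to quantify "time spent gazing at task-relevant areas", as measured here, is the fraction of gaze samples falling inside an area of interest (AOI). The sketch below assumes screen coordinates and a rectangular AOI, and is illustrative rather than the authors' pipeline.

```python
def aoi_dwell_fraction(gaze_points, aoi):
    """Fraction of gaze samples inside a rectangular area of interest.

    gaze_points: iterable of (x, y) fixation/sample coordinates.
    aoi: (x0, y0, x1, y1) rectangle in the same coordinate system."""
    x0, y0, x1, y1 = aoi
    points = list(gaze_points)
    if not points:
        return 0.0
    inside = sum(1 for (x, y) in points if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / len(points)
```

Comparing this fraction across the mono, stereo, 5.1 and 7.4.1 groups is one straightforward way the reported "wider distribution of visual attention" could be operationalised.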
"Composing in spacetime with rainbows: spatial metacomposition in the real world"
Robert S. Jarvis, D. Verhagen
Proceedings of the 15th International Audio Mostly Conference, 2020. DOI: https://doi.org/10.1145/3411109.3411136

There is a long tradition of incorporating acoustic space as a creative parameter in musical composition and performance. This creative potential has been extended by modern sensing and computing technology, which allows the listener's position to act as an input to interactive musical works in immersive digital environments. Furthermore, sensing technology has become sophisticated enough that the barriers to implementing these digital interactive musical systems in the physical world are dissolving. In this research we set out to understand what new modes of artistic performance might be enabled by these interactive spatial musical systems, and what the analysis of such systems can tell us about the compositional principles of arranging musical elements in space as well as time. We applied a practice-based approach, leveraging processes of software development, composition and performance to create a complete system for composing and performing what we refer to as spatial metacompositions. The system is tested at scale in the realisation of a musical work based on the path of a sailplane in flight. Analysis of the work and the supporting system suggests that opportunities exist for extending intermodal composition theory through the analysis of audiovisual renderings of performed spatial works. We also point to unique challenges posed by spatial arrangement, such as finding effective strategies for structuring musical notes in three dimensions so as to produce strong harmonic movement. Beyond enabling new modes of artistic expression, the understanding garnered from these musical structures may help inform a more generalisable approach to non-linear composition, leveraging virtual representations of musical space that respond to arbitrary input data.
"Breaking the workflow: Design heuristics to support the development of usable digital audio production tools: framing usability heuristics for contemporary purposes"
S. McGrath
Proceedings of the 15th International Audio Mostly Conference, 2020. DOI: https://doi.org/10.1145/3411109.3411133

This investigation presents the results of a series of workshops with professional musicians and music producers, eliciting musicians' requirements for software systems. The scope covers how to design systems that support creativity and collaboration while remaining usable, that is, effective, efficient and satisfying to the user. The format models that of similar workshops, taking a three-pronged approach focused on three types of creativity: exploratory, combinatorial and transformational. Participants describe a story that defines different user roles and expectations. Focus groups help to refine and combine the existing experiences and begin to identify ways in which systems can be made more usable and can support more creative ways of working. We also consider the broader notion of usability, defining and describing different user types and how their views of usability may differ or even be at odds. Our findings show that while existing systems are very good at supporting traditional usability metrics, they may not consider the broader implications of a considered and holistic user experience.
"Standstill to the 'beat': differences in involuntary movement responses to simple and complex rhythms"
Agata Zelechowska, V. E. G. Sánchez, A. Jensenius
Proceedings of the 15th International Audio Mostly Conference, 2020. DOI: https://doi.org/10.1145/3411109.3411139

Previous studies have shown that the movement-inducing properties of music largely depend on the rhythmic complexity of the stimuli. However, little is known about how simple isochronous beat patterns differ from more complex rhythmic structures in their effect on body movement. In this paper we study the spontaneous movement of 98 participants instructed to stand as still as possible for 7 minutes while listening to silence and randomised sound excerpts: isochronous drumbeats and complex drum patterns, each at three tempi (90, 120 and 140 BPM). The participants' head movement was recorded with an optical motion capture system. We found that, on average, participants moved more during the sound stimuli than in silence, confirming the results of our previous studies. Moreover, the complex drum patterns elicited more movement than the isochronous drumbeats. Across tempi, participants moved most at 120 BPM when averaging over both stimulus types; for the isochronous drumbeats alone, movement was highest at 140 BPM. These results contribute to our understanding of the interplay between rhythmic complexity, tempo and music-induced movement.
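The abstract does not state which movement measure was used; a common and simple choice in standstill studies of this kind is the average head speed computed from consecutive motion-capture position samples. The following sketch assumes (x, y, z) samples at a fixed sampling rate and is an illustration of that idea, not the authors' analysis code.

```python
import math

def mean_speed(positions, fs):
    """Average head speed (position units per second) from a sequence
    of (x, y, z) motion-capture samples taken at sampling rate fs (Hz).

    Sums the Euclidean distance between consecutive samples and
    converts the mean per-sample displacement to a rate."""
    if len(positions) < 2:
        return 0.0
    total = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    return total * fs / (len(positions) - 1)
```

Comparing this quantity between the silence, isochronous and complex-pattern segments for each participant would yield the kind of per-condition movement comparison the abstract reports.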