LOUDSPEAKER MATRIX ARRAYS CHALLENGING THE WAY WE CREATE AND CONTROL SOUND
E.W. Start. Reproduced Sound 2022, 14 November 2022. DOI: https://doi.org/10.25144/14145

Sampling the considerable literature [1-6] on loudspeaker arrays, one notices that the focus is often narrowed to a single technology. In particular, beamforming and Wave Field Synthesis (WFS) are covered in separate studies and appear to be unrelated, or at best only remotely related, topics. Both are introduced briefly here. Beamforming is a spatial filtering technique used to aim sound in a specific direction. Its benefits include enhanced power efficiency, improved uniformity of coverage, an increased direct-to-reverberant ratio and reduced sound spill. Beamforming techniques can be broadly divided into two categories: mechanical and electronic beamforming, used in curved line arrays and steered column speakers, respectively. WFS is a spatial audio rendering method capable of delivering a physically correct reproduction of the auditory scene. Depending on the practical implementation, the loudspeakers are distributed along a horizontal line or across a (vertical) plane. At first glance these technologies seem to have very little in common, but from a wave perspective they are closely intertwined. More importantly, each technology can benefit from the other: principles and features considered unique to one can be incorporated into the other, and vice versa. Using Matrix Arrays, 3D sound fields can be reproduced and controlled precisely in every direction. This opens up new ways of creating and controlling sound in functional as well as creative applications.
USING COGNITIVE PSYCHOLOGY AND NEUROSCIENCE TO BETTER INFORM SOUND SYSTEM DESIGN AT LARGE MUSICAL EVENTS
J. Burton, A. Hill. Reproduced Sound 2022, 14 November 2022. DOI: https://doi.org/10.25144/14148

Large musical events have become increasingly popular over the last fifty years. It is now not uncommon to have indoor shows in excess of 10,000 people and open-air events of 30,000 people or more. These events present technical challenges that have only begun to be addressed in the last hundred years, with the introduction of sound reinforcement systems, electric lighting and, more recently, video and display technologies. These technologies, however, present an artificial link to the performance, one that requires an understanding of both the audience's expectations and the technologies' abilities and limitations. Although many of these abilities and limitations are well documented, the audience's responses to them are less so. This paper introduces research into audience responses, primarily auditory, at a subconscious level. By investigating these responses, the aim is to find commonalities amongst audiences from which better-informed metrics can be derived.
A CRITICAL ANALYSIS OF SOUND LEVEL MONITORING METHODS AT LIVE EVENTS
A. Hill, J. Mulder, J. Burton, M. Kok, M. Lawrence. Reproduced Sound 2022, 14 November 2022. DOI: https://doi.org/10.25144/14142