Auralization of Measured Room Transitions in Virtual Reality
Thomas McKenzie, Nils Meyer-Kahlen, C. Hold, Sebastian J. Schlecht, V. Pulkki
Journal of the Audio Engineering Society, 2023-06-06. https://doi.org/10.17743/jaes.2022.0084

To auralise a room’s acoustics in six degrees-of-freedom (6DoF) virtual reality (VR), a dense set of spatial room impulse response (SRIR) measurements is required, so interpolating between a sparse set is desirable. This paper studies the auralisation of room transitions by proposing a baseline interpolation method for higher-order Ambisonic SRIRs and evaluating it in VR. The presented method is simple yet applicable to coupled rooms and room transitions. It is based on linear interpolation with RMS compensation, with the direct sound, early reflections, and late reverberation processed separately: the input direct sounds are first steered to the relative direction-of-arrival before summation, and the interpolated early reflections are directionally equalised. The proposed method is first evaluated numerically, which demonstrates its improvements over a basic linear interpolation. A listening test is then conducted in 6DoF VR to assess the density of SRIR measurements needed to plausibly auralise a room transition using the presented interpolation method. The results suggest that, for the tested scenario, an inter-measurement distance of 50 cm to 1 m can be perceptually sufficient.

Measuring Motion-to-Sound Latency in Virtual Acoustic Rendering Systems
Nils Meyer-Kahlen, Miranda Kastemaa, Sebastian J. Schlecht, T. Lokki
Journal of the Audio Engineering Society, 2023-06-06. https://doi.org/10.17743/jaes.2022.0089

Evaluation of Metaverse Music Performance With BBC Maida Vale Recording Studios
Patrick Cairns, Anthony Hunt, D. Johnston, J. Cooper, Ben Lee, H. Daffern, G. Kearney
Journal of the Audio Engineering Society, 2023-06-06. https://doi.org/10.17743/jaes.2022.0086

Virtual-Reality-Based Research in Hearing Science: A Platforming Approach
Rasmus Lundby Pedersen, L. Picinali, Nynne Kajs, F. Patou
Journal of the Audio Engineering Society, 2023-06-06. https://doi.org/10.17743/jaes.2022.0083

The lack of ecological validity in clinical assessment, as well as the challenge of investigating multimodal sensory processing, remain key challenges in hearing science. Virtual Reality (VR) can support hearing research in these domains by combining experimental control with situational realism. However, the development of VR-based experiments is traditionally highly resource-demanding, which poses a significant entry barrier for basic and clinical researchers looking to embrace VR as the research tool of choice. The Oticon Medical Virtual Reality (OMVR) experiment platform fast-tracks the creation or adaptation of hearing research experiment templates, which can be used to explore areas such as binaural spatial hearing, multimodal sensory integration, cognitive hearing behavioral strategies, and auditory-visual training. In this paper, the OMVR's functionalities, architecture, and key elements of implementation are presented, important performance indicators are characterized, and a use-case perceptual evaluation is presented.

The Sonic Interactions in Virtual Environments (SIVE) Toolkit
Silvin Willemsen, Helmer Nuijens, Titas Lasickas, Stefania Serafin
Journal of the Audio Engineering Society, 2023-06-06. https://doi.org/10.17743/jaes.2022.0082

In this paper, the Sonic Interactions in Virtual Environments (SIVE) toolkit, a virtual reality (VR) environment for building musical instruments using physical models, is presented. The audio engine of the toolkit is based on finite-difference time-domain (FDTD) methods and works in a modular fashion. The authors show how the toolkit is built and how it can be imported into Unity to create VR musical instruments, and future developments and possible applications are discussed.

Spatial Integration of Dynamic Auditory Feedback in Electric Vehicle Interior
Théophile Dupré, Sébastien Denjean, M. Aramaki, R. Kronland-Martinet
Journal of the Audio Engineering Society, 2023-06-06. https://doi.org/10.17743/jaes.2022.0087

With the development of electric motor vehicles, the domain of automotive sound design faces new issues and is now concerned with creating suitable and pleasant soundscapes inside the vehicle. For instance, the absence of a predominant engine sound changes the driver's perception of the car's dynamics. Previous studies proposed relevant sonification strategies to augment the interior sound environment by bringing back vehicle dynamics with synthetic auditory cues. Yet users report a lack of blending with the existing soundscape. In this study, we analyze acoustical and perceptual spatial characteristics of the car soundscape and show that the spatial attributes of sound sources are fundamental to improving the perceptual coherence of the global environment.

The SONICOM HRTF Dataset
Isaac Engel, Rapolas Daugintis, Thibault Vicente, Aidan O. T. Hogg, J. Pauwels, Arnaud J. Tournier, Lorenzo Picinali
Journal of the Audio Engineering Society, 2023-05-17. https://doi.org/10.17743/jaes.2022.0066

The Ability to Memorize Acoustic Features in a Discrimination Task
Florian Klein, Tatiana Surdu, Lukas Treybig, S. Werner
Journal of the Audio Engineering Society, 2023-05-17. https://doi.org/10.17743/jaes.2022.0073

Spatial Reconstruction-Based Rendering of Microphone Array Room Impulse Responses
L. McCormack, Nils Meyer-Kahlen, A. Politis
Journal of the Audio Engineering Society, 2023-05-17. https://doi.org/10.17743/jaes.2022.0072

A reconstruction-based rendering approach is explored for the task of imposing the spatial characteristics of a measured space onto a monophonic signal while also reproducing it over a target playback setup. The foundation of this study is a parametric rendering framework, which can operate either on arbitrary microphone array room impulse responses (RIRs) or on Ambisonic RIRs. Spatial filtering techniques are used to decompose the input RIR into individual reflections and anisotropic diffuse reverberation, which are reproduced using dedicated rendering strategies. The proposed approach operates by considering several hypotheses involving different rendering configurations and then determining which hypothesis reconstructs the input RIR most faithfully. In the present study, the hypotheses corresponded to different numbers of potential reflections. Once the optimal number of reflections to render has been determined over time and frequency, the array directional responses used to reconstruct the input RIR are substituted with spatialization gains for the target playback setup. The results of formal listening experiments suggest that the proposed approach produces renderings that are perceptually more similar to reference responses than those obtained with an established subspace-based detection algorithm. The proposed approach also demonstrates performance similar to or better than that achieved with existing state-of-the-art methods.

Perceptual Significance of Tone-Dependent Directivity Patterns of Musical Instruments
Andrea Corcuera, V. Chatziioannou, J. Ahrens
Journal of the Audio Engineering Society, 2023-05-17. https://doi.org/10.17743/jaes.2022.0076

Musical instruments are complex sound sources that exhibit directivity patterns that vary not only with frequency but also as a function of the played tone. It remains unclear whether this tone-dependent directivity variation leads to a perceptible difference compared with an auralization that uses an averaged directivity pattern. This paper examines the directivity of 38 musical instruments from a publicly available database and then selects three representative instruments among those with similar radiation characteristics (oboe, violin, and trumpet). To evaluate the listeners' ability to perceive a difference between auralizations of virtual environments using tone-dependent and averaged directivities, a listening test was conducted using the directivity patterns of the three selected instruments in both anechoic and reverberant conditions. The results show that, in anechoic conditions, listeners can reliably detect differences between the tone-dependent and averaged directivities for the oboe but not for the violin or the trumpet. Nevertheless, in reverberant conditions, listeners can distinguish tone-dependent directivity from averaged directivity for all instruments under study.
