
Journal of the Audio Engineering Society: Latest Publications

Auralization of Measured Room Transitions in Virtual Reality
IF 1.4 | Zone 4 (Engineering & Technology) | Q3 ACOUSTICS | Pub Date: 2023-06-06 | DOI: 10.17743/jaes.2022.0084
Thomas McKenzie, Nils Meyer-Kahlen, C. Hold, Sebastian J. Schlecht, V. Pulkki
To auralise a room’s acoustics in six degrees-of-freedom (6DoF) virtual reality (VR), a dense set of spatial room impulse response (SRIR) measurements is required, so interpolating between a sparse set is desirable. This paper studies the auralisation of room transitions by proposing a baseline interpolation method for higher-order Ambisonic SRIRs and evaluating it in VR. The presented method is simple yet applicable to coupled rooms and room transitions. It is based on linear interpolation with RMS compensation, though direct sound, early reflections and late reverberation are processed separately, whereby the input direct sounds are first steered to the relative direction-of-arrival before summation and interpolated early reflections are directionally equalised. The proposed method is first evaluated numerically, which demonstrates its improvements over a basic linear interpolation. A listening test is then conducted in 6DoF VR, to assess the density of SRIR measurements needed in order to plausibly auralise a room transition using the presented interpolation method. The results suggest that, given the tested scenario, a 50 cm to 1 m inter-measurement distance can be perceptually sufficient.
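As a point of reference for the interpolation stage described above, the following Python sketch performs plain linear interpolation between two time-aligned Ambisonic SRIRs with a broadband RMS-compensation gain. The function name, the array shapes, and the single broadband gain are assumptions made for illustration; the paper's method additionally treats direct sound, early reflections, and late reverberation separately and equalises the interpolated early reflections directionally.

```python
import numpy as np

def interpolate_srir(srir_a, srir_b, alpha, eps=1e-12):
    """Linearly interpolate two Ambisonic SRIRs with broadband RMS compensation.

    srir_a, srir_b : ndarrays of shape (num_samples, num_ambi_channels),
        SRIRs measured at two positions (assumed time-aligned).
    alpha : float in [0, 1], listener position between measurement A (0) and B (1).
    """
    # Plain sample-wise linear interpolation of the two measurements.
    mix = (1.0 - alpha) * srir_a + alpha * srir_b

    # Interpolate the RMS of the inputs rather than the samples, so that
    # partially destructive summation does not lower the reproduced energy.
    rms_a = np.sqrt(np.mean(srir_a ** 2))
    rms_b = np.sqrt(np.mean(srir_b ** 2))
    rms_target = (1.0 - alpha) * rms_a + alpha * rms_b

    rms_mix = np.sqrt(np.mean(mix ** 2))
    return (rms_target / (rms_mix + eps)) * mix

# Example: halfway between two first-order (4-channel) measurements.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    decay = np.exp(-np.linspace(0.0, 8.0, 48000))[:, None]
    srir_a = rng.standard_normal((48000, 4)) * decay
    srir_b = rng.standard_normal((48000, 4)) * decay
    srir_mid = interpolate_srir(srir_a, srir_b, alpha=0.5)
```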
Citations: 1
Measuring Motion-to-Sound Latency in Virtual Acoustic Rendering Systems
IF 1.4 | Zone 4 (Engineering & Technology) | Q3 ACOUSTICS | Pub Date: 2023-06-06 | DOI: 10.17743/jaes.2022.0089
Nils Meyer-Kahlen, Miranda Kastemaa, Sebastian J. Schlecht, T. Lokki
{"title":"Measuring Motion-to-Sound Latency in Virtual Acoustic Rendering Systems","authors":"Nils Meyer-Kahlen, Miranda Kastemaa, Sebastian J. Schlecht, T. Lokki","doi":"10.17743/jaes.2022.0089","DOIUrl":"https://doi.org/10.17743/jaes.2022.0089","url":null,"abstract":"","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":" ","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48943865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluation of Metaverse Music Performance With BBC Maida Vale Recording Studios
IF 1.4 | Zone 4 (Engineering & Technology) | Q3 ACOUSTICS | Pub Date: 2023-06-06 | DOI: 10.17743/jaes.2022.0086
Patrick Cairns, Anthony Hunt, D. Johnston, J. Cooper, Ben Lee, H. Daffern, G. Kearney
{"title":"Evaluation of Metaverse Music Performance With BBC Maida Vale Recording Studios","authors":"Patrick Cairns, Anthony Hunt, D. Johnston, J. Cooper, Ben Lee, H. Daffern, G. Kearney","doi":"10.17743/jaes.2022.0086","DOIUrl":"https://doi.org/10.17743/jaes.2022.0086","url":null,"abstract":"","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":" ","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48519263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Virtual-Reality-Based Research in Hearing Science: A Platforming Approach
IF 1.4 | Zone 4 (Engineering & Technology) | Q3 ACOUSTICS | Pub Date: 2023-06-06 | DOI: 10.17743/jaes.2022.0083
Rasmus Lundby Pedersen, L. Picinali, Nynne Kajs, F. Patou
The lack of ecological validity in clinical assessment, as well as the challenge of investigating multimodal sensory processing, remain key challenges in hearing science. Virtual Reality (VR) can support hearing research in these domains by combining experimental control with situational realism. However, the development of VR-based experiments is traditionally highly resource demanding, which places a significant entry barrier for basic and clinical researchers looking to embrace VR as the research tool of choice. The Oticon Medical Virtual Reality (OMVR) experiment platform fast-tracks the creation or adaptation of hearing research experiment templates to be used to explore areas such as binaural spatial hearing, multimodal sensory integration, cognitive hearing behavioral strategies, auditory-visual training, etc. In this paper, the OMVR’s functionalities, architecture, and key elements of implementation are presented, important performance indicators are characterized, and a use-case perceptual evaluation is presented.
Citations: 0
The Sonic Interactions in Virtual Environments (SIVE) Toolkit
Zone 4 (Engineering & Technology) | Q3 ACOUSTICS | Pub Date: 2023-06-06 | DOI: 10.17743/jaes.2022.0082
Silvin Willemsen, Helmer Nuijens, Titas Lasickas, Stefania Serafin
In this paper, the Sonic Interactions in Virtual Environments (SIVE) toolkit, a virtual reality (VR) environment for building musical instruments using physical models, is presented. The audio engine of the toolkit is based on finite-difference time-domain (FDTD) methods and works in a modular fashion. The authors show how the toolkit is built and how it can be imported into Unity to create VR musical instruments, and future developments and possible applications are discussed.
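To make the FDTD idea concrete, the generic sketch below (not the SIVE toolkit's own code) advances the 1D wave equation of an ideal string with the standard leapfrog scheme and reads an output signal at one grid point; the parameter values and the raised-cosine pluck are placeholders, and the toolkit's schemes for stiff strings, membranes, or plates are more elaborate.

```python
import numpy as np

def ideal_string_fdtd(dur_s=1.0, fs=44100, f0=220.0):
    """Minimal leapfrog FDTD simulation of an ideal string (1D wave equation)."""
    k = 1.0 / fs                             # time step
    c = 2.0 * f0                             # wave speed for a string of unit length
    n_grid = int(np.floor(1.0 / (c * k)))    # grid intervals, chosen for stability
    h = 1.0 / n_grid                         # grid spacing
    lam2 = (c * k / h) ** 2                  # squared Courant number, <= 1 for stability

    u_prev = np.zeros(n_grid + 1)
    u = np.zeros(n_grid + 1)

    # Raised-cosine initial displacement as a crude "pluck", zero initial velocity.
    centre, width = n_grid // 2, max(n_grid // 10, 2)
    idx = np.arange(centre - width, centre + width)
    u[idx] = 0.5 * (1.0 - np.cos(np.pi * (idx - idx[0]) / width))
    u_prev[:] = u

    out = np.zeros(int(dur_s * fs))
    read_pos = n_grid // 4
    for n in range(len(out)):
        u_next = np.zeros_like(u)
        # Centred-difference update for interior points; fixed (Dirichlet) ends.
        u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                        + lam2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        out[n] = u_next[read_pos]
        u_prev, u = u, u_next
    return out
```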
Citations: 0
Spatial Integration of Dynamic Auditory Feedback in Electric Vehicle Interior
IF 1.4 | Zone 4 (Engineering & Technology) | Q3 ACOUSTICS | Pub Date: 2023-06-06 | DOI: 10.17743/jaes.2022.0087
Théophile Dupré, Sébastien Denjean, M. Aramaki, R. Kronland-Martinet
With the development of electric motor vehicles, the domain of automotive sound design addresses new issues and is now concerned with creating suitable and pleasant soundscapes inside the vehicle. For instance, the absence of a predominant engine sound changes the driver's perception of the car's dynamics. Previous studies proposed relevant sonification strategies to augment the interior sound environment by bringing back vehicle dynamics with synthetic auditory cues. Yet, users report a lack of blending with the existing soundscape. In this study, we analyze acoustical and perceptual spatial characteristics of the car soundscape and show that the spatial attributes of sound sources are fundamental to improve the perceptual coherency of the global environment.
Citations: 1
The SONICOM HRTF Dataset
IF 1.4 | Zone 4 (Engineering & Technology) | Q3 ACOUSTICS | Pub Date: 2023-05-17 | DOI: 10.17743/jaes.2022.0066
Isaac Engel, Rapolas Daugintis, Thibault Vicente, Aidan O. T. Hogg, J. Pauwels, Arnaud J. Tournier, Lorenzo Picinali
{"title":"The SONICOM HRTF Dataset","authors":"Isaac Engel, Rapolas Daugintis, Thibault Vicente, Aidan O. T. Hogg, J. Pauwels, Arnaud J. Tournier, Lorenzo Picinali","doi":"10.17743/jaes.2022.0066","DOIUrl":"https://doi.org/10.17743/jaes.2022.0066","url":null,"abstract":"","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":" ","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41421529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
The Ability to Memorize Acoustic Features in a Discrimination Task
IF 1.4 | Zone 4 (Engineering & Technology) | Q3 ACOUSTICS | Pub Date: 2023-05-17 | DOI: 10.17743/jaes.2022.0073
Florian Klein, Tatiana Surdu, Lukas Treybig, S. Werner
{"title":"The Ability to Memorize Acoustic Features in a Discrimination Task","authors":"Florian Klein, Tatiana Surdu, Lukas Treybig, S. Werner","doi":"10.17743/jaes.2022.0073","DOIUrl":"https://doi.org/10.17743/jaes.2022.0073","url":null,"abstract":"","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":" ","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44964886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spatial Reconstruction-Based Rendering of Microphone Array Room Impulse Responses
IF 1.4 | Zone 4 (Engineering & Technology) | Q3 ACOUSTICS | Pub Date: 2023-05-17 | DOI: 10.17743/jaes.2022.0072
L. McCormack, Nils Meyer-Kahlen, A. Politis
A reconstruction-based rendering approach is explored for the task of imposing the spatial characteristics of a measured space onto a monophonic signal while also reproducing it over a target playback setup. The foundation of this study is a parametric rendering framework, which can operate either on arbitrary microphone array room impulse responses (RIRs) or Ambisonic RIRs. Spatial filtering techniques are used to decompose the input RIR into individual reflections and anisotropic diffuse reverberation, which are reproduced using dedicated rendering strategies. The proposed approach operates by considering several hypotheses involving different rendering configurations and thereafter determining which hypothesis reconstructs the input RIR most faithfully. With regard to the present study, these hypotheses involved considering different potential reflection numbers. Once the optimal number of reflections to render has been determined over time and frequency, the array directional responses used to reconstruct the input RIR are substituted with spatialization gains for the target playback setup. The results of formal listening experiments suggest that the proposed approach produces renderings that are perceptually more similar to reference responses, when compared with the use of an established subspace-based detection algorithm. The proposed approach also demonstrates similar or better performance than that achieved with existing state-of-the-art methods.
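The hypothesis-selection step can be pictured with a simplified sketch: for each candidate reflection count K, one time-frequency frame of a first-order Ambisonic RIR is reconstructed from the K most strongly correlated steering vectors, and the K with the lowest penalised reconstruction error is kept, with the residual treated as the diffuse part. The greedy direction selection, the first-order steering vectors, and the penalty weight below are illustrative stand-ins and not the paper's actual spatial-filtering framework.

```python
import numpy as np

def sh_steering(azi, ele):
    """First-order (ACN/N3D) spherical-harmonic steering vector for one direction."""
    x = np.cos(ele) * np.cos(azi)
    y = np.cos(ele) * np.sin(azi)
    z = np.sin(ele)
    return np.array([1.0, np.sqrt(3) * y, np.sqrt(3) * z, np.sqrt(3) * x])

def select_num_reflections(frame, candidate_dirs, k_max=3, penalty=0.05):
    """Pick the reflection count K whose least-squares reconstruction of a
    single FOA frame has the smallest penalised residual energy."""
    steering = np.stack([sh_steering(a, e) for a, e in candidate_dirs], axis=1)  # (4, D)
    frame_energy = np.sum(np.abs(frame) ** 2)
    best_k, best_cost, best_recon = 0, frame_energy, np.zeros_like(frame)
    for k in range(1, k_max + 1):
        # Greedy hypothesis: keep the k directions most correlated with the frame.
        corr = np.abs(steering.conj().T @ frame)
        sel = np.argsort(corr)[-k:]
        a_sel = steering[:, sel]                              # (4, k)
        gains, *_ = np.linalg.lstsq(a_sel, frame, rcond=None)
        recon = a_sel @ gains
        cost = np.sum(np.abs(frame - recon) ** 2) + penalty * k * frame_energy
        if cost < best_cost:
            best_k, best_cost, best_recon = k, cost, recon
    return best_k, best_recon, frame - best_recon  # residual ~ diffuse/ambient part

# Example: a frame containing a single plane wave from 30 degrees azimuth.
if __name__ == "__main__":
    frame = 0.8 * sh_steering(np.deg2rad(30.0), 0.0)
    grid = [(np.deg2rad(a), 0.0) for a in range(0, 360, 15)]
    k, recon, resid = select_num_reflections(frame, grid)
    print(k)  # expected: 1
```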
Citations: 1
Perceptual Significance of Tone-Dependent Directivity Patterns of Musical Instruments
IF 1.4 | Zone 4 (Engineering & Technology) | Q3 ACOUSTICS | Pub Date: 2023-05-17 | DOI: 10.17743/jaes.2022.0076
Andrea Corcuera, V. Chatziioannou, J. Ahrens
Musical instruments are complex sound sources that exhibit directivity patterns that not only vary depending on the frequency, but can also change as a function of the played tone. It is yet unclear whether the directivity variation as a function of the played tone leads to a perceptible difference compared to an auralization that uses an averaged directivity pattern. This paper examines the directivity of 38 musical instruments from a publicly available database and then selects three representative instruments among those with similar radiation characteristics (oboe, violin, and trumpet). To evaluate the listeners’ ability to perceive a difference between auralizations of virtual environments using tone-dependent and averaged directivities, a listening test was conducted using the directivity patterns of the three selected instruments in both anechoic and reverberant conditions. The results show that, in anechoic conditions, listeners can reliably detect differences between the tone-dependent and averaged directivities for the oboe but not for the violin or the trumpet. Nevertheless, in reverberant conditions, listeners can distinguish tone-dependent directivity from averaged directivity for all instruments under study.
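For illustration, an "averaged" directivity of the kind compared against the tone-dependent patterns could be formed by an energetic (RMS) average across tones, as in the hypothetical sketch below; the data layout (one magnitude value per tone and direction, within one frequency band) is an assumption and does not mirror the database format used in the study.

```python
import numpy as np

def average_directivity(per_tone_patterns):
    """Energetic (RMS) average of tone-dependent directivity patterns.

    per_tone_patterns : ndarray of shape (num_tones, num_directions) holding
        the magnitude radiated towards each measurement direction in one
        frequency band, one row per played tone.
    Returns a single pattern of shape (num_directions,), normalised to 0 dB peak.
    """
    energy = np.mean(np.abs(per_tone_patterns) ** 2, axis=0)  # mean power per direction
    pattern = np.sqrt(energy)
    return pattern / np.max(pattern)

# Example: three tones measured at eight azimuths.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    per_tone = np.abs(1.0 + 0.3 * rng.standard_normal((3, 8)))
    averaged = average_directivity(per_tone)
```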
Citations: 1