Multidimensional scaling analysis of head-related transfer functions

F. Wightman, D. Kistler
Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 17 October 1993
DOI: 10.1109/ASPAA.1993.379987
Citations: 23

Abstract

Accurate rendering of auditory objects in a virtual auditory display depends on signal processing that is based on detailed measurements of the human free-field to eardrum transfer function (HRTF). The performance of an auditory display can be severely compromised if the HRTF measurements are not made individually for each potential user. This requirement could sharply limit the practical application of auditory display technology. Thus, we have been working to develop a standard set of HRTFs that could be used to synthesize veridical virtual auditory objects for all users. Our latest effort along those lines has involved a feature analysis of HRTFs from 15 listeners who demonstrated high proficiency localizing virtual sources. The primary objectives were to quantify the differences among HRTFs, to identify listeners with similar and different HRTFs, and to test the localizability of virtual sources synthesized from the HRTFs of an individual whose HRTFs were closely or not closely matched to the listener's own. We used a multidimensional scaling (MDS) algorithm, a statistical procedure which assesses the similarity of a set of objects and/or individuals, to analyze the HRTFs of the 15 listeners. Listeners with similar HRTFs were identified, and their ability to localize virtual sources synthesized from the HRTFs of a "similar" listener was evaluated. All listeners were able to localize accurately. When these same listeners were tested with virtual sources synthesized from HRTFs identified as "different" by the MDS analysis, both azimuth and elevation of virtual sources were judged less accurately. Although we were able to identify "typical" listeners from the MDS analysis, our preliminary data suggest that several alternative sets of HRTFs may be necessary to produce a usable auditory display system.
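The paper itself gives no implementation details, but the core technique it names, multidimensional scaling of pairwise HRTF dissimilarities, can be sketched in a few lines. The following is an illustrative sketch only, using classical (Torgerson) MDS and randomly generated stand-in spectra; the distance measure, spectral representation, and data shapes are assumptions, not the authors' method.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n points into k dimensions
    from an n x n symmetric matrix of pairwise distances D."""
    n = D.shape[0]
    # Double-center the squared-distance matrix: B = -1/2 * J D^2 J
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # Embed with the top-k eigenpairs of B (clip tiny negatives to 0)
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    w, V = np.clip(w[idx], 0, None), V[:, idx]
    return V * np.sqrt(w)

# Hypothetical stand-in for measured data: one flattened magnitude
# spectrum per listener (15 listeners, as in the study).
rng = np.random.default_rng(0)
n_listeners, n_bins = 15, 256
spectra = rng.normal(size=(n_listeners, n_bins))

# Pairwise dissimilarity between listeners' HRTF sets (Euclidean here)
D = np.linalg.norm(spectra[:, None, :] - spectra[None, :, :], axis=-1)

coords = classical_mds(D, k=2)  # 2-D similarity map of the 15 listeners
print(coords.shape)  # (15, 2)
```

Listeners that fall close together in the resulting low-dimensional map would be candidates for sharing a synthesis HRTF set, which is the kind of grouping the abstract's "similar" vs. "different" listener comparison relies on.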