{"title":"Multidimensional scaling analysis of head-related transfer functions","authors":"F. Wightman, D. Kistler","doi":"10.1109/ASPAA.1993.379987","DOIUrl":null,"url":null,"abstract":"Accurate rendering of auditory objects in a virtual auditory display depends on signal processing that is based on detailed measurements of the human free-field to eardrum transfer function (HRTF). The performance of an auditory display can be severely compromised if the HRTF measurements are not made individually, for each potential user. This requirement could sharply limit the practical application of auditory display technology. Thus, we have been working to develop a standard set of HRTFs that could be used to synthesize veridical virtual auditory objects for all users. Our latest effort along those lines has involved a feature analysis of HRTFs from 15 listeners who demonstrated high proficiency localizing virtual sources. The primary objectives were to quantify the differences among HRTFs, to identify listeners with similar and different HRTFs, and to test the localizability of virtual sources synthesized from the HRTFs of an individual with closely and not closely matched HRTFs. We used a multidimensional scaling algorithm, a statistical procedure which assesses the similarity of a set of objects and/or individuals, to analyze the HRTFs of the 15 listeners. Listeners with similar HRTFs were identified and their ability to localize virtual sources synthesized from the HRTFs of a \"similar\" listener was evaluated. All listeners were able to localize accurately. When these same listeners were tested with virtual sources synthesized from HRTFs that were identified to be \"different\" by the MDS analysis. Both azimuth and elevation of virtual sources were judged less accurately. Although we were able to identify \"typical\" listeners from the MDS analysis, our preliminary data suggest that several alternative sets of HRTFs may be necessary to produce a usable auditory display system.<<ETX>>","PeriodicalId":270576,"journal":{"name":"Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1993-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"23","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASPAA.1993.379987","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 23
Abstract
Accurate rendering of auditory objects in a virtual auditory display depends on signal processing that is based on detailed measurements of the human free-field-to-eardrum transfer function (HRTF). The performance of an auditory display can be severely compromised if the HRTF measurements are not made individually for each potential user. This requirement could sharply limit the practical application of auditory display technology. Thus, we have been working to develop a standard set of HRTFs that could be used to synthesize veridical virtual auditory objects for all users. Our latest effort along these lines has involved a feature analysis of HRTFs from 15 listeners who demonstrated high proficiency in localizing virtual sources. The primary objectives were to quantify the differences among HRTFs, to identify listeners with similar and different HRTFs, and to test how well listeners could localize virtual sources synthesized from another individual's closely or not closely matched HRTFs. We used a multidimensional scaling (MDS) algorithm, a statistical procedure that assesses the similarity of a set of objects and/or individuals, to analyze the HRTFs of the 15 listeners. Listeners with similar HRTFs were identified, and their ability to localize virtual sources synthesized from the HRTFs of a "similar" listener was evaluated. All listeners were able to localize accurately. When these same listeners were tested with virtual sources synthesized from HRTFs identified as "different" by the MDS analysis, both azimuth and elevation of the virtual sources were judged less accurately. Although we were able to identify "typical" listeners from the MDS analysis, our preliminary data suggest that several alternative sets of HRTFs may be necessary to produce a usable auditory display system.
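The abstract does not describe how the MDS analysis was implemented. The following is a minimal sketch, assuming that the dissimilarity between two listeners is taken as the Euclidean distance between their HRTF magnitude spectra and that the listeners are then embedded in a low-dimensional space with scikit-learn's MDS. The array shapes, distance measure, and two-dimensional embedding are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical input: one log-magnitude HRTF spectrum per listener
# (e.g., averaged over source directions), shape: listeners x frequency bins.
rng = np.random.default_rng(0)
hrtf_spectra = rng.normal(size=(15, 128))  # placeholder for measured data

# Pairwise dissimilarities between listeners: Euclidean distance between spectra.
diffs = hrtf_spectra[:, None, :] - hrtf_spectra[None, :, :]
dissimilarity = np.sqrt((diffs ** 2).sum(axis=-1))

# Embed the 15 listeners in a 2-D space so that inter-point distances
# approximate the spectral dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

# Listeners that fall close together in `coords` have "similar" HRTFs;
# widely separated listeners are candidates for "different" HRTFs.
print(coords)
```

In such a layout, picking a "typical" listener would amount to choosing one whose point lies near the centroid of a cluster, while cross-listener localization tests would pair listeners that are near to or far from each other in the embedding.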