Sung-Wan Kim, Ji-Yong Lee, Doik Kim, Bum-Jae You, N. Doh
Title: Human localization based on the fusion of vision and sound system
DOI: 10.1109/URAI.2011.6145870
Venue: 2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI)
Published: November 2011
Citations: 3
Abstract
In this paper, a method for accurate human localization based on the sequential fusion of sound and vision is proposed. Although sound localization alone works well in most cases, there are situations, such as a noisy environment or a small inter-microphone distance, that may produce wrong or poor results. A vision system also has deficiencies, such as a limited field of view. To solve these problems, we propose a method that combines sound localization and vision in real time. Specifically, the robot first finds the rough location of the speaker via sound-source localization and then uses vision to increase the accuracy of that location. Experimental results show that the proposed method is more accurate and reliable than pure sound localization.
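The two-stage pipeline described in the abstract (coarse sound-based bearing, then vision-based refinement) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the two-microphone time-difference-of-arrival model, and all numeric values (microphone spacing, camera field of view, image width) are assumptions introduced for the example.

```python
import math

def sound_source_azimuth(tdoa_s, mic_distance_m, speed_of_sound=343.0):
    """Coarse azimuth (degrees) from the time difference of arrival (TDOA)
    between two microphones, using the far-field model
    sin(theta) = c * tdoa / d. This is a generic textbook model, assumed
    here for illustration."""
    s = speed_of_sound * tdoa_s / mic_distance_m
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.degrees(math.asin(s))

def refine_with_vision(coarse_azimuth_deg, face_offset_px,
                       image_width_px=640, horizontal_fov_deg=60.0):
    """After the robot turns its camera toward the coarse azimuth, a face
    detected at a pixel offset from the image center yields a fine angular
    correction (small-angle, pinhole-camera approximation)."""
    deg_per_px = horizontal_fov_deg / image_width_px
    return coarse_azimuth_deg + face_offset_px * deg_per_px

# Stage 1: rough location from sound (hypothetical TDOA reading).
coarse = sound_source_azimuth(tdoa_s=2.9e-4, mic_distance_m=0.2)
# Stage 2: vision refines the estimate (face found 32 px left of center).
fine = refine_with_vision(coarse, face_offset_px=-32)
print(f"coarse: {coarse:.1f} deg, refined: {fine:.1f} deg")
```

The sequential structure mirrors the abstract: the acoustic estimate is only trusted enough to bring the target into the camera's limited field of view, after which the higher-resolution visual measurement dominates.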