Context-aware event-driven stereo matching
Dongqing Zou, Ping Guo, Qiang Wang, Xiaotao Wang, Guangqi Shao, Feng Shi, Jia Li, P. Park
2016 IEEE International Conference on Image Processing (ICIP), pp. 1076-1080, September 2016
DOI: 10.1109/ICIP.2016.7532523
Citations: 19
Abstract
Similarity measurement plays an important role in stereo matching, whether for visual data from standard cameras or for data from novel sensors such as Dynamic Vision Sensors (DVS). Generally speaking, robust feature descriptors contribute to designing a powerful similarity measure, as demonstrated by classic stereo matching methods. However, the variety and representational power of feature descriptors for DVS data are so limited that achieving accurate stereo matching on DVS data becomes very challenging. In this paper, a novel feature descriptor is proposed to improve the accuracy of DVS stereo matching. Our feature descriptor captures the local context, or distribution, of the DVS data, which helps construct an effective similarity measure for matching DVS data and yields accurate stereo matching results. We evaluate our method on ground-truth data and compare it with various standard stereo methods. Experiments demonstrate the efficiency and effectiveness of our method.
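The abstract does not specify the descriptor's exact form. As a rough illustration of the general idea only, the sketch below builds a simple polarity-split angular histogram as a local context descriptor around each DVS event and matches events across the two views by descriptor distance along the epipolar line. The function names, window size, histogram design, and disparity search are hypothetical assumptions for this sketch and are not taken from the paper.

```python
import numpy as np

# Hypothetical sketch (not the authors' exact method): a local-context
# descriptor for DVS events and a descriptor-based disparity search.

def context_descriptor(events, center, radius=7, bins=8):
    """Polarity-split angular histogram of neighbouring events.

    events: structured array with fields 'x', 'y', 't', 'p' (polarity in {-1, +1})
    center: (x, y) pixel location of the event being described
    """
    cx, cy = center
    dx = events['x'] - cx
    dy = events['y'] - cy
    mask = (np.abs(dx) <= radius) & (np.abs(dy) <= radius)
    if not np.any(mask):
        return np.zeros(2 * bins)
    # Angular distribution of neighbouring events, split by polarity.
    angles = np.arctan2(dy[mask], dx[mask])
    pos = events['p'][mask] > 0
    h_pos, _ = np.histogram(angles[pos], bins=bins, range=(-np.pi, np.pi))
    h_neg, _ = np.histogram(angles[~pos], bins=bins, range=(-np.pi, np.pi))
    desc = np.concatenate([h_pos, h_neg]).astype(np.float64)
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

def match_event(left_events, right_events, event, max_disparity=40):
    """Return the disparity minimising descriptor distance along the epipolar line."""
    d_left = context_descriptor(left_events, (event['x'], event['y']))
    best_d, best_cost = 0, np.inf
    # Candidate right-view events on (approximately) the same row.
    row = right_events[np.abs(right_events['y'] - event['y']) <= 1]
    for cand in row:
        disparity = event['x'] - cand['x']
        if 0 <= disparity <= max_disparity:
            d_right = context_descriptor(right_events, (cand['x'], cand['y']))
            cost = np.linalg.norm(d_left - d_right)
            if cost < best_cost:
                best_cost, best_d = cost, disparity
    return best_d
```

The design choice this sketch illustrates is the one the abstract emphasises: because individual DVS events carry little appearance information, the similarity measure is built from the spatial distribution of surrounding events rather than from per-pixel intensities.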